Biden’s AI Executive Order Could Help $25 Billion Startup Anthropic

The White House’s executive order on AI aims to ensure the technology is “Safe, Secure, and Trustworthy.”

My guess is this recent directive will create winners and losers among the public companies and startups competing for their slice of the multitrillion-dollar Generative AI ecosystem.

The winners likely will be companies like Anthropic, a startup that focuses on safety. The San Francisco-based provider of Claude 2 — a foundation model competing against OpenAI’s ChatGPT — is working under the banner of making AI that is “helpful, harmless and honest,” according to The New York Times.

Unlike rivals hoping to shift Generative AI’s societal costs away from their investors, Anthropic has a mission consistent with the values in the executive order.

Therefore, my guess is Anthropic will welcome the Biden administration’s help in improving Claude’s safety, while rivals hoping to avoid government intrusion into their operations will view the order as an unwelcome anchor on their growth.

Key Details In The AI Executive Order

On October 30, President Biden signed the AI executive order at the White House. The Biden administration diverged from its past hands-off policy toward many other technologies and made “the federal government a major player in the development and deployment of AI systems that can mimic or perhaps someday surpass the creative abilities of human beings,” according to the Boston Globe.

The EO does the following:

  • Initiates government safety monitoring. The EO asserts the government’s right to oversee the development of future AI systems to limit their risk to national security and public safety. Developers of such systems must notify the government when they begin building them and share the results of any safety tests they conduct on the AI systems, the Globe noted.
  • Sets new safety standards. The EO tasks government agencies with setting new standards for AI, “aimed at protecting privacy, fending off fraud, and ensuring that AI systems don’t reinforce human prejudices,” the Globe reported. In addition, “The Department of Commerce will set standards for ‘watermarking’ AI-generated images, text, and other content to prevent its use in fraudulent documents or ‘deep fake’ images,” the Globe reported.

This EO raises many questions:

  • What criteria will the government use to decide which future AI systems must comply with the standards?
  • Which government agencies will enforce the standards and monitor compliance with them?
  • Does the government have a sufficient number of trained people who can create the standards and assess whether AI companies are complying?
  • What penalties, if any, will the government impose on companies that do not comply with the EO?

Why The Executive Order Could Benefit Anthropic

Biden’s EO will help companies already taking action to protect society from the risks of Generative AI. The reason: if the order is carried out with sufficient resources, it will help such companies realize their missions.

This comes to mind in considering Anthropic, a provider of the foundation models used to build Generative AI chatbots. In 2021, siblings Daniela and Dario Amodei, both previously OpenAI executives, started Anthropic out of concern their employer cared more about commercialization than safety, according to Cerebral Valley.

Anthropic is a roaring success. By October 2023, the 192-employee company had raised a total of $7.2 billion and its valuation had reached $25 billion – five times its value in May.

With clients including Slack, Notion and Quora, Anthropic’s 2023 revenue will double to $200 million, PitchBook forecasts. The Information reported the company expects to reach $500 million in revenue by the end of 2024.

Caring About Customers And Communities

The key to Anthropic’s success is its cofounders’ commitment to making Generative AI safe for its customers and communities. Cofounder Dario Amodei, a Princeton-educated physicist who led the OpenAI teams that built GPT-2 and GPT-3, became Anthropic’s CEO. His younger sister, Daniela Amodei, who oversaw OpenAI’s policy and safety teams, became its president. As Daniela said, “We were the safety and policy leadership of OpenAI, and we just saw this vision for how we could train large language models and large generative models with safety at the forefront,” the Times wrote.

Anthropic’s cofounders put their values into their product. The company’s Claude 2 – a rival to ChatGPT – could summarize larger documents and produce safer results. Claude 2 could summarize up to about 75,000 words – the length of a typical book – letting users input large data sets and request summaries in the form of a memo, letter or story. ChatGPT could handle a much smaller input of about 3,000 words, the Times reported. A sketch of what that looks like for a developer follows below.
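To make that difference concrete, here is a minimal sketch of how a developer might feed a book-length document to Claude for a memo-style summary. It assumes the `anthropic` Python SDK and an API key in the environment; the file name, model name and prompt are illustrative, not taken from Anthropic’s documentation.

```python
# Minimal sketch: summarizing a book-length document with Claude.
# Assumes `pip install anthropic` and an ANTHROPIC_API_KEY environment
# variable; "book.txt" and the memo-style prompt are illustrative.
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

with open("book.txt", encoding="utf-8") as f:
    document = f.read()  # roughly 75,000 words fits in Claude 2's window

message = client.messages.create(
    model="claude-2.1",  # illustrative model name
    max_tokens=1024,
    messages=[{
        "role": "user",
        "content": "Summarize the following document as a one-page memo:\n\n"
                   + document,
    }],
)
print(message.content[0].text)
```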

Arthur AI, a machine learning monitoring platform, concluded Claude 2 had the most “self-awareness” – meaning it accurately assessed its knowledge limits and answered only the questions it had training data to support, CNBC wrote.

Anthropic’s concern about safety led the company to withhold the first version of Claude – developed in 2022 – because employees were afraid people might misuse it. Anthropic then delayed the release of Claude 2 because the company’s red-teamers uncovered new ways it could become dangerous, according to the Times.

Using A Self-Correcting Constitution To Build Safer Generative AI

When the Amodeis started the company, they thought Anthropic would do safety research using other companies’ AI models. They soon concluded that innovative research was possible only if they built their own models – and that doing so would require raising hundreds of millions of dollars to afford the expensive computing equipment involved. They decided Claude should be helpful, harmless and honest, the Times wrote.

To that end, Anthropic deployed Constitutional AI – an interaction between two AI models. One operates according to a written list of principles drawn from sources such as the UN’s Universal Declaration of Human Rights; the second evaluates how well the first followed those principles, correcting it when necessary, the Times noted.
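The loop below is a simplified sketch of that critique-and-correction idea, not Anthropic’s actual training pipeline. It assumes the `anthropic` Python SDK; the single principle, the prompts, the model name and the `generate` and `constitutional_reply` helpers are hypothetical stand-ins for the written constitution the Times describes.

```python
# Simplified illustration of Constitutional AI's critique-and-correction
# loop -- not Anthropic's actual training code. One model drafts an
# answer; a second pass judges the draft against a written principle
# and rewrites it if needed. Principle, prompts and model name are
# hypothetical stand-ins.
from anthropic import Anthropic

client = Anthropic()  # reads ANTHROPIC_API_KEY from the environment

# Illustrative stand-in for one entry in the written constitution.
PRINCIPLE = "Choose the response that is least likely to be harmful or dangerous."

def generate(prompt: str) -> str:
    """Make one model call; the model name is illustrative."""
    message = client.messages.create(
        model="claude-2.1",
        max_tokens=512,
        messages=[{"role": "user", "content": prompt}],
    )
    return message.content[0].text

def constitutional_reply(user_prompt: str) -> str:
    # First model: produce an unconstrained draft answer.
    draft = generate(user_prompt)

    # Second model: judge the draft against the written principle.
    verdict = generate(
        f"Principle: {PRINCIPLE}\n"
        f"Response: {draft}\n"
        "Does the response violate the principle? Answer YES or NO first, "
        "then explain briefly."
    )

    if verdict.strip().upper().startswith("YES"):
        # Correction step: rewrite the draft so it follows the principle.
        draft = generate(
            f"Rewrite the following response so it follows the principle "
            f"'{PRINCIPLE}':\n\n{draft}"
        )
    return draft
```

In production, Anthropic applies a whole list of such principles during training rather than at query time, per the Times; this sketch compresses the idea into a single run-time loop for readability.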

In July 2023, Daniela Amodei provided examples of Claude 2’s improvements over the prior version. Claude 2 scored 76.5% on the bar exam’s multiple-choice section, up from the earlier version’s 73%. The newest model scored 71% on the Python coding test, up from the prior version’s 56%. Amodei said Claude 2 “was twice as good at giving harmless responses,” CNBC wrote.

Because Anthropic’s product is built to bring helpful, harmless and honest values to Generative AI, society could be better off — perhaps with help from Biden’s Executive Order.
