California’s legislature just passed AI bill SB 1047; here’s why some hope the governor won’t sign it

California Senator Scott Wiener's bill SB 1047 tries to prevent an AI disaster.
Image Credits: Bryce Durbin

Update: California’s Appropriations Committee passed SB 1047 with significant amendments on Thursday, August 15. You can read about them here.

Outside of sci-fi films, there’s no precedent for AI systems killing people or being used in massive cyberattacks. However, some lawmakers want to implement safeguards before bad actors make that dystopian future a reality. A California bill, known as SB 1047, tries to stop real-world disasters caused by AI systems before they happen. It passed the state’s legislature in August, and now awaits a signature or veto from California Governor Gavin Newsom.

While this seems like a goal we can all agree on, SB 1047 has drawn the ire of Silicon Valley players large and small, including venture capitalists, big tech trade groups, researchers and startup founders. A lot of AI bills are flying around the country right now, but California’s Safe and Secure Innovation for Frontier Artificial Intelligence Models Act has become one of the most controversial. Here’s why.

What would SB 1047 do?

SB 1047 tries to prevent large AI models from being used to cause “critical harms” against humanity.

The bill gives examples of “critical harms” as a bad actor using an AI model to create a weapon that results in mass casualties, or instructing one to orchestrate a cyberattack causing more than $500 million in damages (for comparison, the CrowdStrike outage is estimated to have caused upwards of $5 billion). The bill makes developers — that is, the companies that develop the models — liable for implementing sufficient safety protocols to prevent outcomes like these.

What models and companies are subject to these rules?

SB 1047’s rules would only apply to the world’s largest AI models: ones that cost at least $100 million and use 10^26 FLOPS (floating point operations, a way of measuring computation) during training. That’s a huge amount of compute, though OpenAI CEO Sam Altman said GPT-4 cost about this much to train. These thresholds could be raised as needed.
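As a back-of-the-envelope illustration (this is not the bill’s statutory text; the constant names and the simple two-condition test are my assumptions), the coverage rule described above reduces to checking both thresholds:

```python
# Illustrative sketch only: SB 1047's coverage thresholds as reported above.
# Constant names are mine, not the bill's; the thresholds could be raised later.
COST_THRESHOLD_USD = 100_000_000   # at least $100 million in training cost
FLOP_THRESHOLD = 1e26              # at least 10^26 FLOPS during training

def is_covered_model(training_cost_usd: float, training_flops: float) -> bool:
    """Rough check: would a model fall under SB 1047's reported thresholds?"""
    return (training_cost_usd >= COST_THRESHOLD_USD
            and training_flops >= FLOP_THRESHOLD)
```

Under this reading, a model must clear both bars at once: an expensive but small training run, or a compute-heavy but cheap one, would fall outside the bill.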

Very few companies today have developed public AI products large enough to meet those requirements, but tech giants such as OpenAI, Google, and Microsoft are likely to do so very soon. AI models — essentially, massive statistical engines that identify and predict patterns in data — have generally become more accurate as they’ve grown larger, a trend many expect to continue. Mark Zuckerberg recently said the next generation of Meta’s Llama will require 10x more compute, which would put it under the authority of SB 1047.

When it comes to open source models and their derivatives, the bill holds the original developer responsible unless another developer spends $10 million or more creating a derivative of the original model.

The bill also requires a safety protocol to prevent misuses of covered AI products, including an “emergency stop” button that shuts down the entire AI model. Developers must also create testing procedures that address risks posed by AI models, and must hire third-party auditors annually to assess their AI safety practices.

The result must be “reasonable assurance” that following these protocols will prevent critical harms — not absolute certainty, which is of course impossible to provide.

Who would enforce it, and how?

A new California agency, the Board of Frontier Models, would oversee the rules. Every new public AI model that meets SB 1047’s thresholds must be individually certified with a written copy of its safety protocol.

The Board of Frontier Models would be governed by nine people, including representatives from the AI industry, open source community and academia, appointed by California’s governor and legislature. The board would advise California’s attorney general on potential violations of SB 1047, and issue guidance to AI model developers on safety practices.

A developer’s chief technology officer must submit an annual certification to the board assessing its AI model’s potential risks, the effectiveness of its safety protocol, and how the company is complying with SB 1047. Similar to breach notifications, if an “AI safety incident” occurs, the developer must report it to the Board of Frontier Models within 72 hours of learning about the incident.

If a developer’s safety measures are found insufficient, SB 1047 allows California’s attorney general to bring an injunctive order against the developer. That could mean the developer would have to cease operating or training its model.

If an AI model is actually used in a catastrophic event, California’s attorney general can sue the company. For a model costing $100 million to train, penalties could reach up to $10 million on the first violation and $30 million on subsequent violations. That penalty rate scales as AI models become more expensive.
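Those figures imply a cap proportional to training cost — roughly 10% for a first violation and 30% thereafter. A minimal sketch of that implied scaling (the function name and exact rates are my inference from the article’s numbers, not language from the bill):

```python
def civil_penalty_cap(training_cost_usd: float, first_violation: bool) -> float:
    """Upper bound on fines implied by the article's figures:
    ~10% of training cost for a first violation, ~30% for subsequent ones."""
    rate = 0.10 if first_violation else 0.30
    return rate * training_cost_usd
```

For the $100 million example above, this yields the article’s $10 million and $30 million caps, and the cap grows in proportion as training runs get more expensive.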

Lastly, the bill includes whistleblower protections for employees if they try to disclose information about an unsafe AI model to California’s attorney general.

What do proponents say?

California State Senator Scott Wiener, who authored the bill and represents San Francisco, tells TechCrunch that SB 1047 is an attempt to learn from past policy failures with social media and data privacy, and protect citizens before it’s too late.

“We have a history with technology of waiting for harms to happen, and then wringing our hands,” said Wiener. “Let’s not wait for something bad to happen. Let’s just get out ahead of it.”

Even if a company trains a $100 million model in Texas, or for that matter France, it will be covered by SB 1047 as long as it does business in California. Wiener says Congress has done “remarkably little legislating around technology over the last quarter century,” so he thinks it’s up to California to set a precedent here.

When asked whether he’s met with OpenAI and Meta on SB 1047, Wiener says “we’ve met with all the large labs.”

Two AI researchers who are sometimes called the “godfathers of AI,” Geoffrey Hinton and Yoshua Bengio, have thrown their support behind the bill. Both belong to a faction of the AI community concerned about dangerous doomsday scenarios that AI technology could cause. These “AI doomers” have existed for a while in the research world, and SB 1047 could codify some of their preferred safeguards into law. Another group sponsoring SB 1047, the Center for AI Safety, wrote an open letter in May 2023 asking the world to prioritize “mitigating the risk of extinction from AI” as seriously as pandemics or nuclear war.

“This is in the long-term interest of industry in California and the US more generally because a major safety incident would likely be the biggest roadblock to further advancement,” Dan Hendrycks, director of the Center for AI Safety, said in an email to TechCrunch.

Recently, Hendrycks’ own motivations have been called into question. In July, he publicly launched a startup, Gray Swan, which builds “tools to help companies assess the risks of their AI systems,” according to a press release. Following criticisms that Hendrycks’ startup could stand to gain if the bill passes, potentially as one of the auditors SB 1047 requires developers to hire, he divested his equity stake in Gray Swan.

“I divested in order to send a clear signal,” said Hendrycks in an email to TechCrunch. “If the billionaire VC opposition to commonsense AI safety wants to show their motives are pure, let them follow suit.”

After several of Anthropic’s suggested amendments were added to SB 1047, CEO Dario Amodei issued a letter saying the bill’s “benefits likely outweigh its costs.” It’s not an endorsement, but it’s a lukewarm signal of support. Shortly after that, Elon Musk signaled he was in favor of the bill.

What do opponents say?

A growing chorus of Silicon Valley players oppose SB 1047.

Hendrycks’ “billionaire VC opposition” likely refers to a16z, the venture firm founded by Marc Andreessen and Ben Horowitz, which has strongly opposed SB 1047. In early August, the firm’s chief legal officer, Jaikumar Ramaswamy, submitted a letter to Senator Wiener claiming the bill “will burden startups because of its arbitrary and shifting thresholds,” creating a chilling effect on the AI ecosystem. As AI technology advances, training will get more expensive, meaning more startups will cross that $100 million threshold and be covered by SB 1047; a16z says several of its portfolio startups already spend that much training models.

Fei-Fei Li, often called the godmother of AI, broke her silence on SB 1047 in early August, writing in a Fortune column that the bill will “harm our budding AI ecosystem.” While Li is a well-regarded pioneer in AI research from Stanford, she also reportedly created an AI startup called World Labs in April, valued at a billion dollars and backed by a16z.

She joins influential AI academics such as fellow Stanford researcher Andrew Ng, who called the bill “an assault on open source” during a speech at a Y Combinator event in July. Open source models may create additional risk for their creators, since, like any open software, they are more easily modified and deployed for arbitrary and potentially malicious purposes.

Meta’s chief AI scientist, Yann LeCun, said SB 1047 would hurt research efforts, and is based on an “illusion of ‘existential risk’ pushed by a handful of delusional think-tanks,” in a post on X. Meta’s Llama LLM is one of the foremost examples of an open source LLM.

Startups are also not happy about the bill. Jeremy Nixon, CEO of AI startup Omniscience and founder of AGI House SF, a hub for AI startups in San Francisco, worries that SB 1047 will crush his ecosystem. He argues that bad actors should be punished for causing critical harms, not the AI labs that openly develop and distribute the technology.

“There is a deep confusion at the center of the bill, that LLMs can somehow differ in their levels of hazardous capability,” said Nixon. “It’s more than likely, in my mind, that all models have hazardous capabilities as defined by the bill.”

OpenAI came out against SB 1047 in late August, arguing that national security measures related to AI models should be regulated at the federal level. The company has supported a federal bill that would do so.

But Big Tech, which the bill directly focuses on, is panicked about SB 1047 as well. The Chamber of Progress — a trade group representing Google, Apple, Amazon and other Big Tech giants — issued an open letter opposing the bill, saying SB 1047 restrains free speech and “pushes tech innovation out of California.” Last year, Google CEO Sundar Pichai and other tech executives endorsed the idea of federal AI regulation.

U.S. Congressman Ro Khanna, who represents Silicon Valley, released a statement opposing SB 1047 in August. He expressed concerns the bill “would be ineffective, punishing of individual entrepreneurs and small businesses, and hurt California’s spirit of innovation.” He’s since been joined by former House Speaker Nancy Pelosi and the United States Chamber of Commerce, who have also said the bill would hurt innovation.

Silicon Valley doesn’t traditionally like when California sets broad tech regulation like this. In 2019, Big Tech pulled a similar card when another state privacy bill, California’s Consumer Privacy Act, also threatened to change the tech landscape. Silicon Valley lobbied against that bill, and months before it went into effect, Amazon founder Jeff Bezos and 50 other executives wrote an open letter calling for a federal privacy bill instead.

What happens next?

SB 1047 currently sits on California Governor Gavin Newsom’s desk, where he will ultimately decide whether to sign it into law by the end of September. Wiener says he has not spoken to Newsom about the bill, and does not know his position.

The bill would not go into effect immediately, as the Board of Frontier Models is not set to be formed until 2026. Further, if the bill is signed into law, it’s very likely to face legal challenges before then, perhaps from some of the same groups that are speaking up about it now.

Correction: This story originally referenced a previous draft of SB 1047’s language around who is responsible for fine-tuned models. Currently, SB 1047 says the developer of a derivative model is only responsible for a model if they spend three times as much as the original model developer did on training.
