California lawmaker behind SB 1047 reignites push for mandated AI safety reports


California State Senator Scott Wiener on Wednesday introduced new amendments to his latest bill, SB 53, that would require the world’s largest AI companies to publish safety and security protocols and issue reports when safety incidents occur.

If the bill is signed into law, California would become the first state to impose meaningful transparency requirements on leading AI developers, likely including OpenAI, Google, Anthropic, and xAI.

Senator Wiener’s previous AI bill, SB 1047, included similar requirements for AI model developers to publish safety reports. However, Silicon Valley fought ferociously against that bill, and Governor Gavin Newsom ultimately vetoed it. The governor then called for a group of AI leaders, including Fei-Fei Li, the leading Stanford researcher and co-founder of World Labs, to form a policy group and set goals for the state’s AI safety efforts.

California’s AI policy group recently published its final recommendations, citing a need for “requirements on industry to publish information about their systems” in order to establish a “robust and transparent evidence environment.” Senator Wiener’s office said in a press release that SB 53’s amendments were heavily influenced by this report.

“The bill continues to be a work in progress, and I look forward to working with all stakeholders in the coming weeks to refine this proposal into the most scientific and fair law it can be,” Senator Wiener said in the release.

SB 53 aims to strike a balance that Governor Newsom claimed SB 1047 failed to achieve — ideally, creating meaningful transparency requirements for the largest AI developers without thwarting the rapid growth of California’s AI industry.

“These are concerns that my organization and others have been talking about for a while,” said Nathan Calvin, VP of State Affairs for the nonprofit AI safety group, Encode, in an interview with TechCrunch. “Having companies explain to the public and government what measures they’re taking to address these risks feels like a bare minimum, reasonable step to take.”

The bill also creates whistleblower protections for employees of AI labs who believe their company’s technology poses a “critical risk” to society — defined in the bill as contributing to the death or injury of more than 100 people, or more than $1 billion in damage.

Additionally, the bill aims to create CalCompute, a public cloud computing cluster to support startups and researchers developing large-scale AI.

Unlike SB 1047, Senator Wiener’s new bill does not make AI model developers liable for the harms of their AI models. SB 53 was also designed not to burden startups and researchers that fine-tune models from leading AI developers or use open-source models.

With the new amendments, SB 53 is now headed to the California State Assembly Committee on Privacy and Consumer Protection for approval. Should it pass there, the bill will also need to pass through several other legislative bodies before reaching Governor Newsom’s desk.

On the other side of the U.S., New York Governor Kathy Hochul is now considering a similar AI safety bill, the RAISE Act, which would also require large AI developers to publish safety and security reports.

The fate of state AI laws like the RAISE Act and SB 53 was briefly in jeopardy as federal lawmakers considered a 10-year moratorium on state AI regulation, an attempt to prevent a “patchwork” of AI laws that companies would have to navigate. However, that proposal failed in a 99-1 Senate vote earlier in July.

“Ensuring AI is developed safely should not be controversial — it should be foundational,” said Geoff Ralston, the former president of Y Combinator, in a statement to TechCrunch. “Congress should be leading, demanding transparency and accountability from the companies building frontier models. But with no serious federal action in sight, states must step up. California’s SB 53 is a thoughtful, well-structured example of state leadership.”

Up to this point, lawmakers have failed to get AI companies on board with state-mandated transparency requirements. Anthropic has broadly endorsed the need for increased transparency from AI companies, and has even expressed modest optimism about the recommendations from California’s AI policy group. But companies such as OpenAI, Google, and Meta have been more resistant to these efforts.

Leading AI model developers typically publish safety reports for their AI models, but they’ve been less consistent in recent months. Google, for example, did not publish a safety report for Gemini 2.5 Pro, its most advanced model at the time, until months after the model was made available. OpenAI likewise declined to publish a safety report for its GPT-4.1 model; a later third-party study suggested the model may be less aligned than its predecessors.

SB 53 represents a toned-down version of previous AI safety bills, but it still could force AI companies to publish more information than they do today. For now, they’ll be watching closely as Senator Wiener once again tests those boundaries.

Maxwell Zeff is a senior reporter at TechCrunch specializing in AI. Previously with Gizmodo, Bloomberg, and MSNBC, Zeff has covered the rise of AI and the Silicon Valley Bank crisis. He is based in San Francisco. When not reporting, he can be found hiking, biking, and exploring the Bay Area’s food scene.
