On Monday, Anthropic announced its official endorsement of SB 53, a California bill introduced by Senator Scott Wiener that would impose first-of-its-kind transparency requirements on major AI developers. The move marks a rare and significant win for the bill, which faces heavy opposition from industry lobbying groups like the Consumer Technology Association and the Chamber of Progress.
In a blog post, Anthropic said that while it prefers a federal framework for AI safety, progress cannot wait:
“The question isn’t whether we need AI governance, it’s whether we’ll develop it thoughtfully today or reactively tomorrow. SB 53 offers a solid path toward the former.”
If passed, SB 53 would require companies like OpenAI, Anthropic, Google, and xAI to create safety frameworks, release public safety reports before deploying powerful AI models, and provide whistleblower protections. The bill targets “catastrophic risks,” such as AI enabling bioweapon development or cyberattacks, rather than concerns like deepfakes.
The California Senate has already approved a prior version, but the bill still needs a final vote before heading to Governor Gavin Newsom’s desk. Newsom has not commented on SB 53, though he previously vetoed another AI bill from Wiener, SB 1047.
SB 53 has drawn criticism from Silicon Valley and the Trump administration, with opponents arguing it could harm U.S. innovation and encroach on federal authority. Andreessen Horowitz and Y Combinator led opposition to SB 1047, while some industry leaders argue such state-level laws risk violating the Commerce Clause.
Still, experts believe SB 53 is more measured. Dean Ball, a senior fellow at the Foundation for American Innovation, said the bill shows “legislative restraint” and a respect for technical realities that previous drafts lacked.
Most AI companies already publish safety reports, but compliance is voluntary. SB 53 would make those practices state law, with financial penalties for violations. A previous requirement for third-party audits was dropped after industry pushback.
Anthropic co-founder Jack Clark reinforced the urgency in a post on X:
“We have long said we would prefer a federal standard. But in the absence of that, this creates a solid blueprint for AI governance that cannot be ignored.”