California Becomes First State to Pass Landmark AI Regulation into Law

Last week, California Governor Gavin Newsom signed the nation’s first AI safety regulation, SB 53, into law. The Transparency in Frontier Artificial Intelligence Act (TFAIA) establishes public safety regulations for developers of “frontier models,” or large foundation AI models trained using massive amounts of computing power. TFAIA is the first frontier model safety legislation in the country to become law.

Governor Newsom stated that TFAIA will “provide a blueprint for well-balanced AI policies beyond [California’s] borders—especially in the absence of a comprehensive federal AI policy framework and national AI safety standards.” 

TFAIA mostly adopts the recommendations of the Joint California Policy Working Group on AI Frontier Models, which released its final report on frontier AI policy in June.

Effective January 1, 2026, TFAIA will apply to “frontier developers” who have trained, or initiated the training of, a foundation model using a quantity of computing power greater than 10^26 FLOPS (a “frontier model”), with additional requirements for frontier developers with annual gross revenues exceeding $500 million (“large frontier developers”).

Notably, starting on January 1, 2027, TFAIA will require the California Department of Technology to annually provide recommendations to the Legislature on “whether and how to update” TFAIA’s definitions of “frontier model,” “frontier developer,” and “large frontier developer” to “ensure that they accurately reflect technological developments, scientific literature, and widely accepted national and international standards.” 

TFAIA will require a large frontier developer to create, implement, and publish a “frontier AI framework,” which is defined as “documented technical and organizational protocols to manage, assess, and mitigate catastrophic risks.”  Such frameworks must explain the developer’s approaches to:

  • Integration of Standards:  Incorporating “national standards, international standards, and industry-consensus best practices.”
  • Risk Thresholds and Mitigation:  Defining and assessing “thresholds used … to identify and assess whether a frontier model has capabilities that could pose a catastrophic risk” and applying “mitigations to address the potential for catastrophic risks” based on those assessments.
  • Pre-Deployment Assessments:  Reviewing assessments and the adequacy of mitigations before deploying a frontier model externally or for “extensive internal” use, and using third parties to assess catastrophic risks and mitigations.
  • Framework Maintenance:  Revisiting and updating the developer’s frontier AI framework, including criteria for triggering such updates, and defining when models are “substantially modified enough to require” publishing transparency reports required by TFAIA (described further below).
  • Security and Incident Response:  Implementing “cybersecurity practices to secure unreleased model weights” and processes for “identifying and responding to critical safety incidents.”
  • Internal Use Risk Management:  Assessing and managing “catastrophic risk resulting from the internal use” of the developer’s frontier model, including risks resulting from the model “circumventing oversight mechanisms.”

Large frontier developers must review and update their frontier AI frameworks at least annually and, within 30 days of making any “material modification,” publish the modification along with a justification.

A large frontier developer that violates TFAIA’s disclosure and reporting requirements, or that “fails to comply with its own frontier AI framework,” will be subject to civil penalties of up to $1 million per violation, enforced by the California Attorney General. 

Additionally, the bill makes it easier for both members of the public and company whistleblowers to report potential safety risks. The law requires the state’s Office of Emergency Services to set up a mechanism for members of the public to report critical safety incidents. And it prohibits companies from adopting policies that restrict, or otherwise restricting or retaliating against, staff who disclose information they have “reasonable cause” to believe reveals that a developer poses a “specific and substantial danger to the public health or safety resulting from a catastrophic risk.”

The law also calls for the creation of a consortium that will be tasked with developing a “cloud computing cluster,” known as “CalCompute,” that will advance the “development and deployment of artificial intelligence that is safe, ethical, equitable, and sustainable.”

“With a technology as transformative as AI, we have a responsibility to support that innovation while putting in place commonsense guardrails to understand and reduce risk,” California State Senator Scott Wiener, who introduced the bill, said in a statement. “With this law, California is stepping up, once again, as a global leader on both technology innovation and safety.”

Tech innovation groups and leaders had mixed reactions to the legislation.

Collin McCune, head of government affairs at Andreessen Horowitz (a16z), said that while SB 53 “includes some thoughtful provisions that account for the distinct needs of startups … it misses an important mark by regulating how the technology is developed—a move that risks squeezing out startups, slowing innovation, and entrenching the biggest players.” McCune also alluded to the prospect that federal lawmakers could ultimately preempt the California standards.

According to Forbes, Anthropic endorsed SB 53 before it was signed, stating that the law’s “transparency requirements will have an important impact on frontier AI safety. Without it, labs with increasingly powerful models could face growing incentives to dial back their own safety and disclosure programs in order to compete.”
