California Governor Gavin Newsom has signed into law a bill requiring OpenAI and other leading AI developers to disclose how they plan to mitigate potential catastrophic risks from their most advanced models. It is the first state-level legislation in the United States to mandate regular AI safety disclosures, underscoring growing regulatory scrutiny as artificial intelligence advances rapidly.
The law specifically targets “frontier AI models,” the most powerful systems trained with the largest computational resources. Affected companies must submit annual reports detailing how they address risks such as disinformation, cyberattacks, and the dangers of autonomous decision-making. The law will have a significant impact on OpenAI, Google DeepMind, and Anthropic, all of which are headquartered in California or conduct much of their operations there.
Hunton Andrews Kurth’s analysis notes that the legislation could represent a breakthrough in AI oversight by clearly assigning responsibility to companies for safeguarding society. The long-term significance lies in its potential to set a precedent for other U.S. states and even federal regulators, who are still debating the contours of AI governance. California once again positions itself as a pioneer in digital regulation, following its earlier leadership in data protection and consumer rights.