In October 2025, California passed two laws regulating AI companion chatbots, becoming the first U.S. state to establish a legal framework for these applications. The new rules require developers to carry out safety checks and to report major safety incidents to the state. The regulations aim to protect users, particularly where chatbots provide psychological support or social interaction.
The legislation, signed into law by Governor Gavin Newsom on 7 October 2025, consists of two separate bills: one mandates risk-assessment protocols and safeguards against misuse, while the other requires that all serious safety issues be reported to California state authorities. The move responds to the rapid spread of AI companion chatbots, which are increasingly used in services tied to mental health and social engagement. Political analysts note that the decision positions California at the forefront of ethical and safe AI deployment.
The measures' long-term significance lies in the precedent they set: other U.S. states are expected to consider similar regulations. The decision not only imposes new obligations on AI developers but also elevates user safety and transparency as priorities. By mandating safety audits and incident reporting, California has marked a key turning point in how generative AI is integrated into society.