On June 19, 2025, the New York State Legislature passed the Responsible AI Evaluation for Frontier Models Act (RAISE Act), the first comprehensive state-level AI safety regulation in the United States. The Act applies to high-capability AI systems defined as "frontier models": systems that contain at least 5 billion parameters and may have harmful impacts. The bill, introduced by Senator Andrew Gounardes, establishes stringent evaluation requirements, compelling developers to demonstrate the safety of their high-impact AI systems before selling or deploying them in New York State. It addresses key concerns including system safety, cybersecurity vulnerabilities, and risks of misinformation.
The RAISE Act imposes three main obligations on frontier model developers: conducting comprehensive evaluations of their models using methodology from the newly established AI Evaluation Center at New York University, publicly disclosing evaluation results, and demonstrating that they have taken appropriate measures to mitigate risks related to system safety, cybersecurity, and fraud and misinformation. Violations carry significant penalties, enforced by the New York State Attorney General: $10,000 for a first offense and up to $50,000 for repeated infractions. The regulation takes effect by the end of the year, with a 180-day transition period for companies to come into compliance; models used for research and development purposes are explicitly exempt.
The New York regulation sets a significant precedent for other states and complements the White House's October 2023 Executive Order, which established federal-level safety standards for AI development. The AI Evaluation Center at New York University will play a crucial role in implementing the law: with initial funding of $25 million, it is tasked with developing evaluation methodologies for frontier models and publishing the results. Industry reactions have been mixed. While Microsoft, OpenAI, and Anthropic support the regulation, smaller AI ventures have raised concerns about compliance costs, highlighting the challenge of balancing AI safety with encouraging innovation.