New York Passes the RAISE Act: New Regulations for AI Developers

Photo: Redd Francisco / Unsplash

On June 12, 2025, the New York State Legislature passed the Responsible AI Safety and Education Act (RAISE Act), the first comprehensive state-level AI safety regulation in the United States. The Act applies to high-capability AI systems, defined as 'frontier models', that may have potentially harmful impacts and contain at least 5 billion parameters. The bill, introduced by Senator Andrew Gounardes, establishes stringent evaluation requirements, compelling developers to demonstrate the safety of their high-impact AI systems before selling or deploying them in New York State, and addresses key concerns such as system safety, cybersecurity vulnerabilities, and risks of misinformation.

The RAISE Act imposes three main obligations on frontier model developers: conducting comprehensive evaluations of their models using methodology from the newly established AI Evaluation Center at New York University, making public statements about evaluation results, and demonstrating that they have taken appropriate measures to mitigate risks related to system safety, cybersecurity, and fraud and misinformation. Violations carry significant penalties, enforced by the New York State Attorney General: $10,000 for a first offence and up to $50,000 for repeated infractions. The regulation is expected to take effect by the end of the year, with companies given a 180-day transition period to ensure compliance; the law explicitly exempts models used for research and development purposes.

The New York regulation sets a significant precedent for other states and complements the White House's October 2023 Executive Order, which established federal-level safety standards for AI development. The AI Evaluation Center at New York University will play a crucial role in implementing the law: with initial funding of $25 million, it is tasked with developing evaluation methodologies for frontier models and publishing the results. Industry reactions have been mixed: while Microsoft, OpenAI, and Anthropic support the regulation, smaller AI ventures have expressed concerns about compliance costs, highlighting the challenge of balancing AI safety with innovation.

Sources:

New York State passes RAISE Act for frontier AI models - The Economic Times
New York lawmakers have approved the Responsible AI Safety and Education (RAISE) Act, mandating transparency and safety measures for frontier AI models. Supported by AI experts, the Act requires AI labs to release safety reports and report incidents, with penalties for non-compliance. India’s AI adoption is growing, prompting increased demand for AI trust and safety professionals.
Sen. Gounardes’ AI Safety Bill Passes the State Senate
FOR IMMEDIATE RELEASE: JUNE 12, 2025. New York State Senator Andrew Gounardes issued the following statement after his RAISE Act passed the State Senate: “Would you let your child ride in a car with no seatbelt or airbags? Of course not. So why would you let them use an incredibly powerful AI without basic safeguards in place?”
New York Seeks to RAISE the Bar on AI Regulation
New York state lawmakers on June 12, 2025 passed the Responsible AI Safety and Education Act (the RAISE Act), which aims to safeguard against artificial intelligence (AI)-driven disaster scenarios by focusing on the largest AI model developers; the bill now heads to the governor’s desk for final approval. The RAISE Act is the latest legislative movement at the state level seeking to regulate AI, a movement that may continue to gain momentum after a 10-year moratorium on AI regulation was removed from the recently passed One Big Beautiful Bill.