EU AI Safety Code Mandates Assessment of Four Critical Risks

The EU AI Code of Practice establishes a significant new benchmark for AI safety practices, operationalising the AI Act's mandatory requirements for models exceeding 10^25 FLOPs of training compute. The European Commission released the three chapters of the General-Purpose AI Code of Practice on July 10, 2025, as a voluntary tool to help industry comply with the AI Act rules that come into effect on August 2, 2025. The AI Act covers general-purpose AI (GPAI) models trained with more than 10^23 FLOPs, but models posing systemic risk (those trained with more than 10^25 FLOPs, such as GPT-4, Gemini 1.5 Pro, Grok 3, and Claude 3.7 Sonnet) must meet additional safety and cybersecurity requirements. The Code was developed by 13 independent experts with input from over 1,000 stakeholders, including model providers, SMEs, academics, AI safety experts, and civil society organisations.
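
To make the compute thresholds concrete, the sketch below shows how a model's training compute maps onto the categories described above. This is an illustration only, not anything specified by the Code or the AI Act: the threshold constants follow the figures in the text, while the function name and the example inputs are hypothetical.

```python
# Illustrative classification by training compute (FLOPs), following the
# 10^23 and 10^25 thresholds described in the text. Hypothetical sketch,
# not a legal test from the AI Act or the Code.

GPAI_THRESHOLD_FLOP = 1e23           # presumed general-purpose AI model
SYSTEMIC_RISK_THRESHOLD_FLOP = 1e25  # additional safety/security obligations

def classify_model(training_flop: float) -> str:
    """Return the category implied by a model's total training compute."""
    if training_flop > SYSTEMIC_RISK_THRESHOLD_FLOP:
        return "GPAI with systemic risk"
    if training_flop > GPAI_THRESHOLD_FLOP:
        return "GPAI"
    return "below GPAI threshold"

# Hypothetical compute figures, for illustration only:
print(classify_model(3e25))  # -> "GPAI with systemic risk"
print(classify_model(5e23))  # -> "GPAI"
```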

The safety and security chapter is the most extensive of the three, detailing the risk management process providers must implement. The chapter mandates that providers always assess four specified risks: chemical, biological, radiological, and nuclear (CBRN) threats; loss of control; cyber offense; and harmful manipulation. During risk analysis, providers must collect information about each risk from a range of sources, carry out state-of-the-art model evaluations, and then compare the resulting risk estimates against predefined acceptance criteria; a minimal sketch of this loop follows. A recent assessment found current practice lacking: fewer than half of the reviewed firms reported substantive testing for dangerous capabilities linked to large-scale risks.
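
Below is a minimal sketch of that assess-then-compare step. The four risk names follow the chapter, but everything else (the 0-to-1 scoring scale, the acceptance values, and all identifiers) is a hypothetical illustration, not a process defined by the Code.

```python
# Hypothetical sketch of the risk analysis loop described above: estimate
# each mandated risk, then compare against a predefined acceptance criterion.
# Risk names follow the Code's safety and security chapter; the scoring
# scale and criteria values are invented for illustration.

from dataclasses import dataclass

MANDATED_RISKS = [
    "CBRN threats",          # chemical, biological, radiological, nuclear
    "loss of control",
    "cyber offense",
    "harmful manipulation",
]

@dataclass
class RiskAssessment:
    risk: str
    estimate: float    # e.g. aggregated score from model evaluations (0-1)
    acceptance: float  # predefined acceptance criterion for this risk

    @property
    def acceptable(self) -> bool:
        return self.estimate <= self.acceptance

def review(assessments: list[RiskAssessment]) -> list[str]:
    """Return the risks whose estimates exceed their acceptance criteria."""
    return [a.risk for a in assessments if not a.acceptable]

# Hypothetical example run:
results = [RiskAssessment(r, estimate=0.2, acceptance=0.3) for r in MANDATED_RISKS]
results[2].estimate = 0.5   # suppose cyber offense evaluations score higher
print(review(results))      # -> ['cyber offense']
```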

The Code is expected to have global impact: the high cost of training models that pose systemic risk makes it unlikely providers will develop different models for different jurisdictions. Although adherence to the Code is voluntary, the presumption of conformity offers a strong incentive to sign, bringing reduced administrative burden and increased legal certainty. OpenAI, Google, Mistral AI, and Anthropic have voiced support, while Meta has already stated it will not sign. Regardless of who signs, compliance with the AI Act itself is mandatory, making the Code a useful signal of the level of compliance that may be expected.

Sources:

1. General-Purpose AI Code of Practice now available (European Commission): The European Commission has received the final version of the General-Purpose AI Code of Practice, a voluntary tool developed by 13 independent experts, with input from over 1,000 stakeholders, including model providers, small and medium-sized enterprises, academics, AI safety experts, rightsholders, and civil society organisations.

2. AI Safety under the EU AI Code of Practice — A New Global Standard? (Center for Security and Emerging Technology): To protect Europeans from the risks posed by artificial intelligence, the EU passed its AI Act last year. This month, the EU released a Code of Practice to help providers of general-purpose AI comply with the AI Act. This blog reviews the measures set out in the new Code's safety and security chapter, assesses how they compare to existing practices, and considers what the Code's global impact might be.
To protect Europeans from the risks posed by artificial intelligence, the EU passed its AI Act last year. This month, the EU released a Code of Practice to help providers of general purpose AI comply with the AI Act. This blog reviews the measures set out in the new Code’s safety and security chapter, assesses how they compare to existing practices, and what the Code’s global impact might be.