OECD Introduces Common Reporting System for AI Incidents


In February 2025, the OECD released its report titled "Towards a Common Reporting Framework for AI Incidents", which proposes a unified international system for reporting and monitoring artificial intelligence-related events. This initiative responds to growing risks such as discrimination, data protection violations, and security issues.

The report defines an AI incident as an event in which the development, use, or malfunction of an AI system directly or indirectly leads to harm, and defines an AI risk as a potential precursor to such an incident. The framework draws on four sources: the OECD Framework for the Classification of AI Systems, the Responsible AI Collaborative's AI Incident Database (AIID), the OECD Global Product Recalls Portal, and the OECD AI Incidents Monitor (AIM). It establishes 29 reporting criteria grouped into eight categories, including incident metadata, details of the harm incurred, and the economic environment. Seven of the criteria are mandatory; the remainder are optional, which gives the framework a flexible structure for reporting and monitoring AI incidents.
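The mandatory/optional split described above can be sketched as a simple validation check on an incident record. Note that the seven field names below are hypothetical placeholders chosen for illustration; they are not the OECD's actual criteria, which the report itself enumerates.

```python
# Illustrative sketch only: the framework groups 29 criteria into eight
# categories and marks seven of them mandatory. The field names here are
# hypothetical placeholders, NOT the OECD's actual criteria.
MANDATORY_CRITERIA = {
    "title", "description", "date", "severity",
    "harm_type", "ai_system", "jurisdiction",
}

def missing_mandatory(report: dict) -> list[str]:
    """Return the mandatory criteria that are absent or empty in a report."""
    return sorted(MANDATORY_CRITERIA - {k for k, v in report.items() if v})

example = {
    "title": "Chatbot leaks personal data",
    "description": "Model output reproduced personal data from training text.",
    "date": "2025-02-10",
    "severity": "moderate",
    "harm_type": "data protection violation",
    "ai_system": "customer-support chatbot",
    "jurisdiction": "EU",
}
print(missing_mandatory(example))  # -> [] (all mandatory criteria present)
```

A receiving registry could reject or flag submissions where this check returns a non-empty list, while leaving the optional criteria free for jurisdiction-specific use.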

The common reporting framework aims to enhance interoperability in AI incident reporting while complementing national policies and regulatory measures: countries can apply the common approach while retaining the flexibility to respond under their own national policies. The framework helps decision-makers identify high-risk systems, understand current and emerging risks, and evaluate impacts on affected stakeholders, while promoting information sharing across jurisdictions without infringing data protection, intellectual property, or security laws.

Sources:

1. OECD, "Towards a Common Reporting Framework for AI Incidents". This OECD report proposes a unified approach to reporting AI incidents, aiming to help policymakers understand AI incidents across diverse contexts, identify high-risk systems, and assess current and potential risks associated with AI technologies.

2. DataGuidance, a research resource on evolving privacy and security regulations worldwide.

3. OECD incident report: definitions for AI incidents and related terms. As AI is used more widely across industries, the potential for AI systems to cause harm, whether through unintentional bugs, misuse, or malicious attacks, also increases. Common definitions of AI incidents, hazards, and related terms help identify and prevent such incidents and allow the UK tech industry, regulators, and others to align on terminology. This shared understanding facilitates cross-organisation and cross-border learning from AI incidents.