In February 2025, the OECD released its report titled "Towards a Common Reporting Framework for AI Incidents", which proposes a unified international system for reporting and monitoring artificial intelligence-related events. This initiative responds to growing risks such as discrimination, data protection violations, and security issues.
The report defines an AI incident as an event in which the development, use, or malfunction of an AI system directly or indirectly leads to harm, and defines AI risk as a potential precursor to an incident. The framework was developed using four sources: the OECD AI Systems Classification Framework, the Responsible AI Collaboration's AI Incidents Database (AIID), the OECD Global Product Recalls Portal, and the OECD AI Incidents Monitor (AIM). The report establishes 29 criteria for reporting incidents, categorised into eight groups, including incident metadata, details of damages incurred, and the economic environment; seven of these criteria are mandatory. According to the report, this small mandatory core alongside optional criteria gives the framework a flexible structure for reporting and monitoring AI incidents.
The common reporting framework aims to enhance interoperability in AI incident reporting while complementing national policies and regulatory measures. Countries can apply the common approach whilst retaining the flexibility to respond in line with their own national policies. The framework assists decision-makers in identifying high-risk systems, understanding current and emerging risks, and evaluating impacts on affected stakeholders. It also promotes information sharing across jurisdictions without infringing data protection, intellectual property, or security laws.