Comparison of the EU AI Act and the GDPR: Fundamental Differences and Synergies

The European Parliament adopted the Artificial Intelligence Regulation (EU AI Act) on March 13, 2024. The Act complements the General Data Protection Regulation (GDPR) in regulating the development and use of AI systems in the EU. While the GDPR focuses primarily on fundamental rights and the protection of personal data, the EU AI Act pursues broader objectives, covering health, safety, democracy, the rule of law, and environmental protection.

The most significant differences between the two regulations lie in their approaches to risk management. The EU AI Act introduces a four-level risk categorization (prohibited, high-risk, limited-risk, and minimal-risk systems), while the GDPR sets out more general requirements. The EU AI Act also provides for stricter sanctions: fines for the most serious infringements can reach 7% of annual worldwide turnover or €35 million, whichever is higher, compared to the GDPR's upper limit of 4% or €20 million.

The EU AI Act pays special attention to human oversight: Article 14 requires high-risk AI systems to be designed so that natural persons can effectively oversee them. Article 22 of the GDPR, for its part, grants data subjects the right not to be subject to decisions based solely on automated processing that produce legal effects concerning them or similarly significantly affect them.

The scope of the two regulations also differs: while the GDPR applies only to the processing of personal data, the EU AI Act applies even where no personal data is processed. Both regulations, however, have extraterritorial effect, reaching organizations established outside the EU.
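To make the difference between the two fine regimes concrete, the "higher of a fixed cap or a share of turnover" rule can be sketched as a small calculation. The €2 billion turnover figure below is purely hypothetical, chosen for illustration; this is arithmetic on the published ceilings, not legal advice.

```python
def max_fine(turnover_eur: float, pct: float, cap_eur: float) -> float:
    """Upper fine limit: the higher of a fixed amount or a share of annual turnover."""
    return max(cap_eur, turnover_eur * pct)

turnover = 2_000_000_000  # hypothetical annual worldwide turnover of €2 billion

# EU AI Act, most serious infringements: 7% of turnover or €35 million
ai_act_ceiling = max_fine(turnover, 0.07, 35_000_000)

# GDPR, Article 83(5) tier: 4% of turnover or €20 million
gdpr_ceiling = max_fine(turnover, 0.04, 20_000_000)

print(f"AI Act ceiling: €{ai_act_ceiling:,.0f}")  # roughly €140 million
print(f"GDPR ceiling:  €{gdpr_ceiling:,.0f}")     # roughly €80 million
```

For a company of this size the percentage branch dominates in both regimes; for smaller companies the fixed caps (€35 million vs. €20 million) become the binding limits.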

Both regulations employ similar tools for demonstrating compliance. The EU AI Act prescribes conformity assessment and a Fundamental Rights Impact Assessment (FRIA) for high-risk systems. The FRIA aims to ensure that the AI system in question does not violate fundamental rights, such as the right to privacy, non-discrimination, or the right to due process. During this process, the potential impacts of the system are analyzed, along with whether adequate safeguards are in place to prevent legal harm. In parallel, the GDPR requires a Data Protection Impact Assessment (DPIA) for high-risk data processing operations. The DPIA aims to identify and manage risks arising from data processing, particularly risks to the rights and freedoms of data subjects. This is especially important when new technology is applied, when personal data is processed at large scale, or when data subjects' rights are exposed to heightened risk.

The principles of transparency and accountability play central roles in both regulations: Article 13 of the EU AI Act contains detailed transparency requirements for high-risk AI systems, while Articles 13–15 of the GDPR govern data subjects' rights to information and access. The supervisory structures are also parallel: each Member State must designate national authorities to monitor compliance with the EU AI Act, supported by the European Artificial Intelligence Board and the European AI Office, much as the GDPR relies on national data protection authorities and the European Data Protection Board.

Sources:

The EU AI Act’s relationship with data protection law: key takeaways (Thomson Reuters Practical Law, based on the practice note "EU AI Act: data protection aspects (EU)")
GDPR and AI Act: similarities and differences (activeMind.legal)
Top 10 operational impacts of the EU AI Act – Leveraging GDPR compliance (IAPP)