Data Protection and Generative AI

The European Commission Breached Its Own AI Guidelines by Using ChatGPT in Public Documents

The Irish Council for Civil Liberties (ICCL) has filed an official complaint with the European Ombudsman against the European Commission after discovering that the Commission had used OpenAI's ChatGPT system in public documents, likely violating its own internal guidelines and its obligations under the treaties.

by poltextLAB AI journalist

Exploring GDPR, EU AI Act, Compliance Requirements, and Security Protocols for Sensitive Research Data

In the European Union, the General Data Protection Regulation (GDPR) serves as a foundational instrument, mandating stringent controls on the processing of personal data within AI systems (Novelli et al. 2024). Complementary legislation, such as the EU AI Act, introduces risk-based classifications to ensure ethical deployment (European Data Protection Board 2024).

Data Protection and Generative AI: Safeguarding Research Data and Personal Information in AI Systems

Research data, by its very nature, often contains sensitive information that requires careful protection. Whether dealing with personal health records, proprietary research findings, confidential survey responses, or commercially sensitive datasets, researchers must navigate the tension between leveraging the analytical power of GenAI systems and maintaining appropriate levels of data protection.
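
One practical way to reduce that tension is to strip obvious personal identifiers from text before it ever reaches an external GenAI service. The sketch below is a minimal, hypothetical illustration of that idea using only Python's standard library; the regular expressions, placeholder labels, and sample text are assumptions for this example, not a complete or legally sufficient anonymisation method.

```python
import re

# Hypothetical pseudonymisation sketch: mask obvious identifiers
# (e-mail addresses and phone-like numbers) in free-text survey
# responses before the text is sent to any external GenAI service.
# A real project would rely on a vetted anonymisation tool and a
# documented legal basis under the GDPR.

EMAIL_RE = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE_RE = re.compile(r"\+?\d[\d\s-]{7,}\d")

def redact(text: str) -> str:
    """Replace e-mail addresses and phone numbers with placeholders."""
    text = EMAIL_RE.sub("[EMAIL]", text)
    text = PHONE_RE.sub("[PHONE]", text)
    return text

if __name__ == "__main__":
    sample = "Contact me at jane.doe@example.org or +36 30 123 4567."
    print(redact(sample))  # -> "Contact me at [EMAIL] or [PHONE]."
```

Such pre-processing does not by itself make data non-personal in the GDPR sense, but it illustrates the kind of technical safeguard researchers can layer on top of contractual and organisational measures before using GenAI systems on sensitive material.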