Ethical and Responsible Use of Generative AI in Research: Overview of EU and International Guidelines

The integration of generative artificial intelligence (GenAI) into research environments has fundamentally transformed how scholars approach knowledge creation, data analysis, and academic writing. As these powerful technologies become increasingly sophisticated and accessible, they offer unprecedented opportunities for enhancing research productivity and enabling novel discoveries. At the same time, their deployment introduces complex ethical challenges that demand careful consideration and robust governance frameworks (European Commission 2025). The ethical landscape surrounding GenAI in research extends beyond traditional research integrity concerns. Whilst foundational principles such as accountability and transparency remain central, the unique characteristics of generative AI systems—including their probabilistic nature, potential for hallucination, and capacity for sophisticated content generation—necessitate new ethical frameworks specifically tailored to these technologies (Farangi et al. 2024). The stakes are particularly high in research contexts, where the integrity of knowledge production and the credibility of scientific institutions depend upon maintaining rigorous ethical standards.

Recent scholarship has highlighted the multifaceted nature of these ethical considerations. Hagendorff's comprehensive scoping review identified 378 distinct normative issues across 19 topic areas, demonstrating the breadth and complexity of challenges that researchers, institutions, and policymakers must navigate (Hagendorff 2024). These challenges encompass technical considerations related to system reliability and bias, as well as fundamental questions about human agency, intellectual property, authorship attribution, and the preservation of critical thinking skills in an increasingly automated research environment.

The European Union has emerged as a global leader in developing comprehensive frameworks for responsible AI governance, with particular attention to research applications. This leadership builds upon a robust foundation of research integrity principles established through the ALLEA (All European Academies) European Code of Conduct for Research Integrity, revised in 2023 (ALLEA 2023). The ALLEA code serves as the primary standard for upholding research integrity across all EU-funded research projects and explicitly underpins the living guidelines for promoting the responsible use of generative AI in research. It establishes four fundamental principles of research integrity: reliability in ensuring research quality, honesty in developing and communicating research transparently, respect for colleagues and society, and accountability for research from conception to publication. These principles provide the ethical foundation upon which AI-specific guidelines are built, recognising that whilst technology evolves, core research integrity values remain constant.

Building upon this foundation, the European Commission's Living Guidelines on the Responsible Use of Generative AI in Research, updated in April 2025, represent the most current and comprehensive attempt to provide practical guidance for the research community (European Commission 2025). These guidelines also draw upon the foundational principles established in the EU's Ethics Guidelines for Trustworthy AI, which articulated seven key requirements for ethical AI deployment: human agency and oversight, technical robustness and safety, privacy and data governance, transparency, diversity and non-discrimination, societal and environmental well-being, and accountability (European Commission 2019). The EU framework adopts a stakeholder-specific approach, providing tailored recommendations for researchers, research organisations, and research funding bodies. For individual researchers, the guidelines emphasise the importance of developing AI literacy whilst maintaining critical evaluation skills. Researchers are encouraged to understand the limitations and potential biases of AI systems, maintain transparency about AI assistance in their work, and ensure that human judgement remains central to research decision-making processes.

Research organisations are called upon to establish institutional policies and support structures that facilitate responsible AI use whilst maintaining research integrity. This includes developing training programmes, establishing ethical review processes for AI-assisted research, and creating infrastructure that supports secure and compliant AI deployment (European Commission 2025).

Complementing EU policy initiatives, leading academic institutions have developed their own frameworks for ethical GenAI use. Porsdam Mann and colleagues exemplify the academic community's commitment to establishing philosophically grounded ethical guidelines for AI-assisted research (Porsdam Mann et al. 2024). Their framework proposes three essential criteria for the ethical use of large language models in academic writing: human vetting and guaranteeing (at least one author must guarantee and take responsibility for the accuracy and integrity of the work), substantial human contribution (each author must contribute substantially to conception, analysis, or drafting), and acknowledgement and transparency (authors should acknowledge LLM use appropriately). These criteria address fundamental concerns about maintaining human responsibility and academic integrity whilst enabling beneficial use of AI assistance in scholarly work.

Several fundamental principles emerge consistently across these various frameworks. The principle of human agency requires that researchers maintain meaningful control over research design, methodology selection, and interpretation of findings, even when utilising AI assistance. This extends beyond mere oversight to encompass genuine understanding of AI contributions and the ability to critically evaluate and validate AI-generated outputs.

Transparency represents another cornerstone, encompassing both technical transparency about AI system functioning and procedural transparency about AI use in research processes. Researchers must be able to explain how AI systems contribute to their work and how AI-generated outputs are integrated into research findings (European Commission 2025). This requirement poses particular challenges given the "black box" nature of many contemporary AI systems.

Accountability principles establish clear lines of responsibility for research outcomes whilst recognising the complex interactions between human researchers and AI systems. Human researchers retain ultimate responsibility for the quality, accuracy, and ethical implications of their work, regardless of the level of AI assistance employed (European Commission 2019). This responsibility cannot be delegated to AI systems or their developers.

Privacy and data governance considerations are particularly crucial given the sensitive nature of much research data. Researchers must ensure that privacy protections are maintained throughout the entire data lifecycle when using AI systems, whilst carefully evaluating the provenance of AI training data and considering potential data leakage risks (ALLEA 2023).

These ethical frameworks provide the foundation for addressing specific practical challenges that researchers face when implementing GenAI in their work. The principles outlined above directly inform approaches to three critical areas that require detailed consideration: authorship attribution and responsibility for AI-generated content, ensuring transparency in research applications through publisher requirements and guidelines, and developing appropriate citation practices for various generative AI tools whilst maintaining process documentation and prompting transparency. Each of these areas presents unique challenges that build upon the foundational ethical principles whilst requiring specific guidance and best practices. The question of authorship attribution, for instance, must balance recognition of AI contributions with preservation of human responsibility and creativity. Transparency requirements must address both technical disclosure about AI use and procedural documentation that enables reproducibility and peer review. Citation practices must evolve to accommodate new forms of AI assistance whilst maintaining scholarly integrity and enabling proper attribution.
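To illustrate what process documentation and prompting transparency might look like in practice, the sketch below shows one hypothetical way a researcher could keep an auditable record of AI assistance. The record structure, field names, and the log_ai_use helper are illustrative assumptions rather than part of any EU guideline or publisher requirement, and would need to be adapted to institutional policy.

    # Hypothetical sketch of an AI-usage log for process documentation.
    # Field names and file format are illustrative assumptions, not part of
    # the EU living guidelines; adapt them to institutional or publisher policy.
    import hashlib
    import json
    from dataclasses import dataclass, asdict
    from datetime import datetime, timezone

    @dataclass
    class AIUsageRecord:
        tool: str            # model name and version (hypothetical example values)
        purpose: str         # what the AI assisted with: drafting, coding, summarising
        prompt: str          # the exact prompt submitted to the system
        output_digest: str   # SHA-256 of the raw output, so the record can be verified later
        human_review: str    # how the output was vetted before being used
        timestamp: str       # UTC time of the interaction

    def log_ai_use(tool, purpose, prompt, raw_output, human_review,
                   logfile="ai_usage_log.jsonl"):
        """Append one AI-assistance event to a JSON Lines audit log."""
        record = AIUsageRecord(
            tool=tool,
            purpose=purpose,
            prompt=prompt,
            output_digest=hashlib.sha256(raw_output.encode("utf-8")).hexdigest(),
            human_review=human_review,
            timestamp=datetime.now(timezone.utc).isoformat(),
        )
        with open(logfile, "a", encoding="utf-8") as f:
            f.write(json.dumps(asdict(record)) + "\n")
        return record

A log of this kind supports the transparency and accountability principles discussed above: it documents what was asked, what was produced, and how a human vetted the output, without requiring the full AI output to be stored.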

References:

1. ALLEA. 2023. The European Code of Conduct for Research Integrity – Revised Edition 2023. Berlin: All European Academies. Available at: https://allea.org/code-of-conduct/

2. European Commission. 2019. Ethics Guidelines for Trustworthy AI. European Commission. Available at: https://digital-strategy.ec.europa.eu/en/library/ethics-guidelines-trustworthy-ai

3. European Commission. 2025. Living Guidelines on the Responsible Use of Generative AI in Research (Second Version). Directorate-General for Research and Innovation. Available at: https://research-and-innovation.ec.europa.eu/.../ec_rtd_ai-guidelines.pdf

4. Farangi, Mohamad Reza, Hassan Nejadghanbar, and Guangwei Hu. 2024. "Use of Generative AI in Research: Ethical Considerations and Emotional Experiences." Ethics & Behavior: 1–17.

5. Hagendorff, Thilo. 2024. "Mapping the Ethics of Generative AI: A Comprehensive Scoping Review." Minds and Machines 34 (4): 39.

6. Porsdam Mann, Sebastian, Anuraag A. Vazirani, Mateo Aboy, Brian D. Earp, Timo Minssen, I. Glenn Cohen, and Julian Savulescu. 2024. "Guidelines for Ethical Use and Acknowledgement of Large Language Models in Academic Writing." Nature Machine Intelligence 6 (11): 1272–1274.