Citing Generative AI in Scientific Research: Publishing Guidelines and Ethical Requirements

Publishers have recognised both the potential and the risks of generative AI (GAI) and have formulated policies accordingly. Broadly, these policies emphasise three principles: (1) human authorship – GAI tools cannot be credited as authors; (2) transparency and disclosure – authors must disclose when and how GAI has been used; and (3) accountability – human authors remain responsible for the accuracy and integrity of content produced with AI assistance.

When to acknowledge generative AI

Publishers generally distinguish between assistive AI and generative AI. Assistive tools such as grammar checkers or spell-checkers do not need to be disclosed, whereas generative tools that produce substantive text, analyses or images do. Elsevier warns that generative AI should be used only to improve readability and language, and that all use must be accompanied by human oversight. Similarly, Wiley's guidelines note that authors may not use AI to generate or modify original research data, but must describe any use of generative AI for drafting or summarising content. These conditions imply that disclosure is needed whenever a model produces words, code, equations, figures or analyses that shape the intellectual content of a paper (Chetwynd 2024). Generative AI use may also extend to data collection and analysis.

A bibliometric analysis of generative AI guidelines reported that three-quarters of publishers provide specific instructions on what details to disclose: the name, model and version of the AI tool, the prompts used and the purpose for which it was employed (Ganjavi et al. 2024). Authors should therefore document any instance where the AI tool influences the research narrative or findings, including summarising literature, drafting sections, generating equations or creating figures. Conversely, minor rephrasing, grammar correction and formatting can generally remain undisclosed.
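Because several publishers ask for exactly these details – tool name, model version, prompts and purpose – it is easier to record each use at the time it happens than to reconstruct it at submission. The sketch below shows one hypothetical way to keep such a running log in Python; the record fields and file name are illustrative, not mandated by any publisher.

```python
import json
from dataclasses import dataclass, asdict
from datetime import date

# Hypothetical record of a single GenAI use. The fields mirror the details
# publishers most often ask authors to disclose (tool name, model/version,
# prompt, purpose); the field names themselves are illustrative.
@dataclass
class AIUsageRecord:
    tool: str           # e.g. "ChatGPT"
    model_version: str  # e.g. "GPT-4"
    prompt: str         # the prompt submitted to the tool
    purpose: str        # what the output was used for in the manuscript
    used_on: str        # ISO date of use

def log_usage(record: AIUsageRecord, path: str = "ai_usage_log.jsonl") -> None:
    """Append one usage record to a JSON Lines log kept with the project."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_usage(AIUsageRecord(
    tool="ChatGPT",
    model_version="GPT-4",
    prompt="Summarise the main findings of the three papers pasted below.",
    purpose="First draft of the literature-summary paragraph in Section 2",
    used_on=date.today().isoformat(),
))
```

Each use becomes one self-contained record that can later be condensed into whatever disclosure statement or template (such as SAGE's) the target publisher requires.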

Where to disclose generative AI use

Although the precise location varies by publisher, two common sections are used for disclosure: the methods section (or materials and methods) and an acknowledgements or declaration section.

The main publishers' requirements for disclosing GenAI use are summarised below, each followed by a link to the relevant policy.

Cambridge University Press: Any use of AI tools in research content must be clearly declared and explained within the publication, in the same manner as other tools or methodologies. Authors remain fully accountable for the accuracy and originality of their work and must ensure that any AI use complies with Cambridge's plagiarism and citation standards. (Link)

Elsevier: Authors must disclose any use of generative AI or AI-assisted tools in the manuscript, with a statement included in the published work, typically in a dedicated section before the references. These tools may be used only to improve language and readability, not to generate scientific content or conclusions, and authors must retain full oversight and accountability. Use of AI for figures or artwork is prohibited unless it forms part of the research design, in which case detailed information (tool name, version, purpose) must be reported in the methods section. (Link)

Nature Publishing Group: Nature Portfolio requires that any use of large language models (LLMs) for content generation be clearly documented in the methods section, or in another appropriate part of the manuscript if no such section exists. Disclosure is not required for AI-assisted copy editing that only improves language and style, but all substantive AI-generated content must be transparently reported, with authors retaining full accountability for the final work. (Link)

Oxford University Press: Authors must disclose any use of generative AI in the preparation of their work and cite it appropriately in-text or in notes, following the relevant style guide. AI-generated content may be included only with written permission from OUP and must be fully verified by the author, who remains responsible for the work's integrity and originality. (Link)

Palgrave Macmillan: Any use of LLMs for content generation must be clearly documented in the methods section, or elsewhere if no such section exists. AI-assisted copy editing for readability or grammar does not require disclosure, provided that authors retain full accountability and the final text reflects their original work. (Link)

Routledge, Taylor & Francis: Taylor & Francis distinguishes between journal articles and books. For journal articles, authors must include a statement in the methods or acknowledgements section identifying the tool, its version, how it was used and why. For monographs or edited volumes, the disclosure should appear in the preface or introduction. The publisher prohibits AI-generated images or figures and instructs researchers to obtain early approval from editors when planning to use generative AI. (Link)

SAGE: Authors must fully disclose the use of generative AI tools by completing a detailed template specifying the tool used, how and why it was used, the final prompts and responses, and where AI-generated content appears in the submission. This information must be submitted alongside the manuscript to ensure transparency and accountability in the writing process. (Link)

Springer: Springer Nature requires that any use of LLMs for content creation be clearly documented in the methods section, or in another suitable part of the manuscript. Disclosure is not required for AI-assisted copy editing that enhances readability and style, but authors retain full responsibility for the accuracy and transparency of all AI-supported content. (Link)

Wiley: Authors must disclose any use of AI technologies during manuscript preparation, including their purpose, whether they influenced core arguments or conclusions, and how AI-generated content was reviewed and verified. Full documentation must be submitted with the material, as transparency is essential to uphold Wiley's ethical publishing standards. (Link)
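Drawing the common requirements together – tool name, version, purpose and human review – a declaration placed in the acknowledgements or a dedicated section might read as follows; the wording is illustrative, not any publisher's mandated template: "During the preparation of this manuscript the authors used ChatGPT (GPT-4) to draft a summary of the background literature in Section 2. All AI-generated text was reviewed, edited and verified by the authors, who take full responsibility for the content of the publication."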

To maintain scholarly integrity while harnessing generative AI, researchers must treat these tools as they would any other method: with transparent reporting, critical oversight and proper attribution. As the publisher policies summarised above show, disclosure is not merely bureaucratic; it safeguards reproducibility and protects against unacknowledged plagiarism and erroneous conclusions. Comprehensive documentation allows peers to assess the reliability of AI-assisted content by showing the prompts used, the specific model version and the reasons for employing the tool. This level of transparency helps to preserve trust in research, particularly because AI outputs can sound authoritative yet still contain factual errors.

References:

1. Chetwynd, Ellen. 2024. ‘Ethical Use of Artificial Intelligence for Scientific Writing: Current Trends’. Journal of Human Lactation 40(2): 211–215.

2. Ganjavi, Conner, Michael B. Eppler, Asli Pekcan, Brett Biedermann, Andrea Abreu, Gary S. Collins, Inderbir S. Gill, et al. 2024. ‘Publishers’ and Journals’ Instructions to Authors on Use of Generative Artificial Intelligence in Academic and Scientific Publishing: Bibliometric Analysis’. BMJ 384: e077192. Available at: https://www.bmj.com/content/384/bmj-2023-077192