Authorship Attribution and Responsibility for AI-Generated Content: Can Generative AI Tools Be Authors?

The proliferation of generative AI (GenAI) tools raises profound questions about authorship in scientific publications, particularly whether GenAI systems can be considered authors and how responsibility for their output should be shared. Traditional notions of authorship emphasise human agency, accountability, and intellectual contribution, as set out in established guidelines from bodies such as the International Committee of Medical Journal Editors (ICMJE), which require authors to make substantial contributions, draft or critically revise the work, approve the final version, and agree to be accountable for all aspects of the work (ICMJE 2024). GenAI, however, operates through probabilistic pattern-matching rather than genuine understanding or ethical judgement, and so challenges these criteria. This section examines the attribution of authorship to GenAI, the allocation of responsibility for AI-generated content, prevailing journal policies on the matter, and illustrative cases in which misuse has caused significant problems.

GenAI tools cannot be considered authors under current scholarly frameworks, primarily because they lack the capacity for accountability and intentionality. Authorship implies a moral and legal responsibility that machines cannot fulfil, as they possess neither consciousness nor the ability to defend their work. For instance, the Committee on Publication Ethics (COPE) asserts that AI tools fail to meet authorship requirements since they cannot assume responsibility for submitted content, including its accuracy, integrity, or potential conflicts of interest (COPE Council 2024). Foundational literature on authorship, such as Kassirer's criteria, reinforces this by emphasising that each listed author must be able to take public responsibility for the content of the work, a standard that predates AI but applies directly today (Kassirer 1995). In practical terms, GenAI systems such as GPT-4 generate outputs based on patterns in their training data, often producing plausible but inaccurate information, a phenomenon known as 'hallucination', without any ability to verify or correct it. Consequently, attributing authorship to such tools would undermine the credibility of academic discourse, since authorship serves not only to assign credit but also to ensure traceability and ethical oversight.

Journal policies consistently prohibit GenAI tools from being listed as authors, emphasising that only natural persons qualify for such attribution. A survey of leading publishers reveals a consensus on this point, as summarised in the table below:

| Publisher | Authorship Policy for GenAI | Source |
| --- | --- | --- |
| Cambridge University Press | AI does not meet the Cambridge requirements for authorship, given the need for accountability. AI and LLM tools may not be listed as an author on any scholarly work published by Cambridge. | Link |
| Elsevier | Authors should not list generative AI and AI-assisted technologies as an author or co-author, nor cite AI as an author. Authorship implies responsibilities and tasks that can only be attributed to and performed by humans. | Link |
| Nature Publishing Group | Large Language Models (LLMs), such as ChatGPT, do not currently satisfy our authorship criteria. Notably an attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs. | Link |
| Oxford University Press | AI does not qualify as an author and should not be used to undertake primary authorial responsibilities, such as generating arguments and scientific insights, writing analysis, or drawing conclusions. | Link |
| Palgrave Macmillan | Large Language Models (LLMs), such as ChatGPT, do not currently satisfy our authorship criteria. Notably an attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs. | Link |
| Routledge, Taylor & Francis | Generative AI tools must not be listed as an author, because such tools are unable to assume responsibility for the submitted content or manage copyright and licensing agreements. Authorship requires taking accountability for content, consenting to publication via a publishing agreement, and giving contractual assurances about the integrity of the work, among other principles. These are uniquely human responsibilities that cannot be undertaken by Generative AI tools. | Link |
| SAGE | As a publisher, Sage supports and believes in the value of human creativity and human authorship. Large Language Models (LLMs) cannot be listed as an author of a work, nor take responsibility for the text they generate. As such, human oversight, intervention and accountability are essential to ensure the accuracy and integrity of the content we publish. | Link |
| Springer | Large Language Models (LLMs), such as ChatGPT, do not currently satisfy our authorship criteria. Notably an attribution of authorship carries with it accountability for the work, which cannot be effectively applied to LLMs. | Link |
| Wiley | Copyright laws and protections vary globally, as do associated guidelines for AI-assisted content. Because copyright protection generally requires human authorship, AI-generated content without substantial human modification may not qualify for copyright protection. | Link |

Responsibility for AI-generated content invariably rests with the human authors, who must oversee, verify, and disclose its use to maintain research integrity. Authors bear the onus of ensuring that GenAI outputs are factually accurate and ethically sound, which includes guarding against plagiarism and against bias amplified from training datasets. Policies from major publishers stipulate that AI cannot be named as an author and that its use must be transparently reported, with humans accountable for any errors. This aligns with broader principles of AI ethics, in which human oversight mitigates risks such as misinformation. For example, if GenAI assists in drafting sections of a manuscript, the authors must critically review and integrate that material, treating the tool as an aid rather than a collaborator. Failure to do so can lead to breaches of integrity, as GenAI may inadvertently replicate copyrighted material or fabricate references, leaving the burden on humans to detect and rectify such issues.
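As a concrete illustration of what such oversight can look like in practice, fabricated references are one failure mode that lends itself to partial automation. The sketch below is a minimal, illustrative example rather than any publisher's official tooling; it assumes the third-party `requests` library and the public Crossref REST API, and the listed DOIs are placeholders. It checks whether each cited DOI is known to Crossref; because Crossref does not register every scholarly DOI, a miss is a prompt for manual verification rather than proof of fabrication.

```python
import requests

# Public Crossref REST API endpoint for looking up a single work by DOI.
CROSSREF_API = "https://api.crossref.org/works/"

def doi_is_known(doi: str, timeout: float = 10.0) -> bool:
    """Return True if Crossref resolves the DOI, False on a 404.

    A False result does not prove the reference is fabricated (some DOIs
    are registered with other agencies), but it flags it for manual review.
    """
    response = requests.get(CROSSREF_API + doi, timeout=timeout)
    return response.status_code == 200

# Hypothetical DOIs extracted from a manuscript's reference list.
candidate_dois = [
    "10.24318/cCVRZBms",            # DOI cited in this article's references
    "10.1000/placeholder.example",  # made-up DOI for illustration
]

for doi in candidate_dois:
    flag = "found" if doi_is_known(doi) else "NOT FOUND - check manually"
    print(f"{doi}: {flag}")
```

A fuller workflow would also compare the metadata Crossref returns (title, authors, year) against the citation as written, since GenAI tools sometimes attach a genuine DOI to an invented title.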

Despite these safeguards, numerous cases illustrate the perils of mishandling GenAI in academic work, often resulting in retractions, ethical violations, and reputational damage. One notable incident involved a preprint on metaverse applications in education that credited ChatGPT as a co-author, prompting widespread disapproval from scientists who argued that AI lacks the accountability required for authorship. In another case, a paper in Nurse Education in Practice listed ChatGPT as an author, triggering debates on integrity and subsequent editorial scrutiny (Stokel-Walker 2023). A fraudulent scheme uncovered in the Global International Journal of Innovative Research involved AI-generated articles misattributed to non-existent authors, demonstrating risks of identity theft and evidence manipulation (Spinellis 2025). Google Scholar has been inundated with GPT-fabricated papers on controversial topics, spreading misinformation and evading detection until flagged (Haider et al. 2024). These examples, set against a record of more than 10,000 retractions in 2023 alone, underscore the sharp rise in AI-fuelled misconduct, including hallucinated references and plagiarised content (Van Noorden 2023).

In sum, the integration of GenAI into scholarly workflows offers undeniable benefits but demands rigorous governance to preserve trust. By denying authorship to AI tools and enforcing human responsibility, publishers and institutions mitigate risks while fostering innovation. As technologies evolve, ongoing revisions to guidelines, informed by cases of misuse, will be essential to balance progress with integrity. Ultimately, authorship remains a distinctly human endeavour, rooted in ethical accountability that machines cannot replicate.

References:

1. COPE Council. 2024. COPE position – Authorship and AI. Committee on Publication Ethics (CC BY-NC-ND 4.0). Available at: https://doi.org/10.24318/cCVRZBms

2. Haider, Jutta, Kristofer Rolf Söderström, Björn Ekström, and Malte Rödl. 2024. GPT-fabricated scientific papers on Google Scholar: Key features, spread, and implications for preempting evidence manipulation. Harvard Kennedy School Misinformation Review 5, no. 5.

3. ICMJE. 2024. Defining the Role of Authors and Contributors. International Committee of Medical Journal Editors. Available at: https://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html

4. Kassirer, Jerome P. 1995. Authorship criteria. Science 268: 785–786.

5. Stokel-Walker, Chris. 2023. ChatGPT listed as author on research papers: many scientists disapprove. Nature News, January 26. Available at: https://www.nature.com/articles/d41586-023-00107-z

6. Van Noorden, Richard. 2023. More than 10,000 research papers were retracted in 2023 — a new record. Nature 624 (21/28 December): 479–481.