Misinformation, defined as false or misleading information disseminated regardless of intent, poses significant challenges to societal trust and democratic processes (Wardle and Derakhshan 2017). Unlike disinformation, which involves deliberate deception, misinformation encompasses a broader spectrum, including unintentional errors, rumours, and misinterpretations. The advent of generative artificial intelligence (GenAI) models, capable of producing human-like text, images, and videos, has amplified the scale and complexity of misinformation. Misinformation thrives in environments of uncertainty, where incomplete or ambiguous information prompts individuals to fill gaps with assumptions or unverified claims (Lewandowsky et al. 2012). It can manifest in various forms, such as fabricated news stories, manipulated images, or misleading scientific claims. The spread of misinformation is facilitated by cognitive biases, including confirmation bias, the tendency to favour information that aligns with pre-existing beliefs (Nickerson 1998). Social media platforms, with their rapid information-sharing capabilities, exacerbate this issue by creating echo chambers that reinforce false narratives (Bakshy et al. 2015).
GenAI models, trained on vast datasets, can generate coherent text, realistic images, and even deepfake videos with minimal human input (Brown et al. 2020). While designed for applications such as creative writing, education, and customer service, their capabilities have raised concerns about misuse. The accessibility of GenAI tools, often available through open-source platforms or public APIs, democratises content creation but also lowers the barrier to producing misleading material (Buchanan et al. 2021). The strength of GenAI lies in its ability to mimic human communication, making its outputs difficult to distinguish from authentic content. For example, large language models (LLMs) can craft persuasive narratives or fabricated academic papers, while image-generation models can produce photorealistic depictions of non-existent events. This realism enhances the potential for misinformation to deceive audiences, particularly when combined with the viral nature of online platforms.
GenAI models contribute to misinformation in several ways. First, they enable the rapid production of false content at scale. A single user can generate thousands of misleading social media posts or articles in minutes, overwhelming fact-checking efforts (Vosoughi et al. 2018). For instance, during the 2024 US presidential election, AI-generated deepfake videos of candidates making false statements circulated widely, raising concerns about their influence on voter perceptions (Hsu and Thompson 2024). Such content, often tailored to exploit emotional triggers, spreads faster than factual information because of its novelty and shareability (Vosoughi et al. 2018). Second, GenAI can amplify existing misinformation by rephrasing or reformatting it to evade detection. Content moderation systems designed to flag known false narratives struggle to identify paraphrased or visually altered versions produced by AI (Buchanan et al. 2021). This adaptability makes GenAI a powerful tool for actors seeking to bypass platform safeguards, whether for ideological, financial, or malicious purposes. Third, GenAI outputs can inadvertently perpetuate misinformation when models are trained on biased or inaccurate datasets. If a model's training data contains misleading information, the model may reproduce those errors in its outputs and present them as factual (Brown et al. 2020). For example, early LLMs occasionally generated incorrect historical or scientific claims, reflecting biases in their training corpora. While improvements in data curation have mitigated this issue, the risk persists, particularly for models with less rigorous oversight.
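To make the detection challenge concrete, the sketch below illustrates why exact matching of known false claims fails against even light AI rewording, while a crude lexical-similarity check can still catch it. It is illustrative only: the claim text, threshold, and matching methods are invented for this example and do not describe any platform's actual moderation pipeline, and a genuinely fluent paraphrase using different vocabulary would defeat the lexical check as well, which is why detection increasingly relies on semantic embeddings.

```python
import hashlib
import math
import re
from collections import Counter

# Hypothetical examples: a claim already on a fact-checkers' blocklist,
# and an AI-reworded version of the same claim.
KNOWN_FALSE_CLAIM = "The election was cancelled and all votes were destroyed."
AI_REWORDING = "All of the votes were destroyed after the election was cancelled."

def fingerprint(text: str) -> str:
    """Exact-match fingerprint of normalised text, as a simple blocklist might store."""
    normalised = re.sub(r"\W+", " ", text.lower()).strip()
    return hashlib.sha256(normalised.encode()).hexdigest()

def bag_of_words(text: str) -> Counter:
    """Word-count vector of the text."""
    return Counter(re.findall(r"[a-z']+", text.lower()))

def cosine_similarity(a: Counter, b: Counter) -> float:
    """Cosine similarity of word-count vectors; a crude stand-in for embedding similarity."""
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

# Exact matching: the reworded claim hashes differently, so it slips past the blocklist.
print(fingerprint(KNOWN_FALSE_CLAIM) == fingerprint(AI_REWORDING))  # False

# Lexical similarity still flags this light rewording (score roughly 0.83, above a
# 0.8 threshold), but a full paraphrase with different wording would evade it too.
score = cosine_similarity(bag_of_words(KNOWN_FALSE_CLAIM), bag_of_words(AI_REWORDING))
print(score > 0.8)  # True
```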
Addressing the role of GenAI in misinformation requires a multifaceted approach. Technologically, developers can implement safeguards such as watermarking AI-generated content or restricting access to high-risk models (Buchanan et al. 2021). However, watermarks can often be stripped or weakened, for instance by paraphrasing the output, and open-source models remain available outside any single developer's control. Policy interventions, such as regulations mandating transparency about AI-generated content, could enhance accountability, though global enforcement is challenging (Paris and Donovan 2019). Education plays a critical role in equipping individuals to evaluate information critically: media literacy programmes emphasising source verification and bias awareness can reduce susceptibility to misinformation (Lewandowsky et al. 2012). Platforms must also strengthen content moderation, leveraging AI to detect and flag misleading content while balancing free expression. Collaborative efforts between governments, technology companies, and civil society are essential to establish standards for responsible AI use.
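As an illustration of the watermarking idea mentioned above, the toy sketch below shows the statistical principle behind "green-list" style text watermarks: a generator secretly biases its word choices toward a keyed subset of the vocabulary, and a detector checks whether a suspect text contains implausibly many such words. Everything here (the key, the tokenisation, the scoring) is invented for illustration and does not reflect any deployed scheme; it also suggests why paraphrasing, which reshuffles the word sequence, can erase the signal.

```python
import hashlib
import math

def is_green(prev_token: str, token: str, key: str = "demo-key") -> bool:
    """Pseudo-randomly assign roughly half of all words to a 'green list', keyed on the previous word."""
    digest = hashlib.sha256(f"{key}|{prev_token.lower()}|{token.lower()}".encode()).digest()
    return digest[0] % 2 == 0

def watermark_z_score(text: str, key: str = "demo-key") -> float:
    """How far the observed fraction of green words deviates from the 0.5 expected in unwatermarked text."""
    tokens = text.split()
    pairs = list(zip(tokens, tokens[1:]))
    if not pairs:
        return 0.0
    observed = sum(is_green(p, t, key) for p, t in pairs) / len(pairs)
    return (observed - 0.5) * math.sqrt(len(pairs)) / 0.5

# A watermarking generator would nudge its sampling toward green words, so its output
# would score many standard deviations above zero; text written without the watermark
# has no such bias. Paraphrasing the output changes the word pairs and pushes the score
# back toward zero, which is one reason watermarks can be stripped in practice.
human_text = "Voters should check contested claims against several independent and reliable sources"
print(round(watermark_z_score(human_text), 2))  # a detector would only flag large positive scores
```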
The proliferation of GenAI-driven misinformation raises ethical questions about the responsibilities of developers and users. Should developers be held accountable for misuse of their models, or does responsibility lie with those who deploy them maliciously? The democratisation of GenAI empowers creativity but also risks enabling harm, highlighting the need for ethical frameworks to guide its development and use (Wardle and Derakhshan 2017). Societally, the erosion of trust in information sources threatens democratic institutions and social cohesion. When individuals cannot distinguish truth from falsehood, public discourse suffers and polarisation intensifies (Bakshy et al. 2015). Addressing this challenge requires not only technological and policy solutions but also a cultural shift towards valuing evidence-based reasoning.
In sum, misinformation, a pervasive issue rooted in human cognition and amplified by digital platforms, has been transformed by the capabilities of GenAI models. These models, while offering unprecedented creative potential, facilitate the rapid production and dissemination of false content, challenging efforts to maintain information integrity. By understanding the mechanisms through which GenAI contributes to misinformation, stakeholders can develop targeted strategies to mitigate its impact. Combining technological innovation, policy reform, and public education offers a path forward, ensuring that the benefits of GenAI are harnessed without compromising trust in the information ecosystem.
References:
1. Bakshy, Eytan, Solomon Messing, and Lada A. Adamic. 2015. "Exposure to Ideologically Diverse News and Opinion on Facebook." Science 348 (6239): 1130–1132. https://doi.org/10.1126/science.aaa1160
2. Brown, Tom, Benjamin Mann, Nick Ryder, Melanie Subbiah, Jared D. Kaplan, Prafulla Dhariwal, Arvind Neelakantan, et al. 2020. "Language Models Are Few-Shot Learners." Advances in Neural Information Processing Systems 33: 1877–1901.
3. Buchanan, Ben, Andrew Lohn, Micah Musser, and Katerina Sedova. 2021. "Truth, Lies, and Automation: How Language Models Could Change Disinformation." Washington, DC: Center for Security and Emerging Technology.
4. Lewandowsky, Stephan, Ullrich K. H. Ecker, Colleen M. Seifert, Norbert Schwarz, and John Cook. 2012. "Misinformation and Its Correction: Continued Influence and Successful Debiasing." Psychological Science in the Public Interest 13 (3): 106–131.
5. Nickerson, Raymond S. 1998. "Confirmation Bias: A Ubiquitous Phenomenon in Many Guises." Review of General Psychology 2 (2): 175–220.
6. Paris, Britt, and Joan Donovan. 2019. "Deepfakes and Cheap Fakes: The Manipulation of Audio and Visual Evidence." New York: Data & Society Research Institute.
7. Vosoughi, Soroush, Deb Roy, and Sinan Aral. 2018. "The Spread of True and False News Online." Science 359 (6380): 1146–1151.
8. Wardle, Claire, and Hossein Derakhshan. 2017. Information Disorder: Toward an Interdisciplinary Framework for Research and Policymaking. Strasbourg: Council of Europe.