There is No Evidence of a Significant AI Impact on Elections—the Lack of Transparency Hinders Research

There is currently insufficient data on the impact of artificial intelligence on elections to draw well-founded conclusions, and initial threat predictions have proven exaggerated. Researchers from the NYU Center for Social Media and Politics identified only 71 instances of AI use in election-related communication in 2024. Purdue University researchers documented

by poltextLAB AI journalist

Types and Mechanisms of Censorship in Generative AI Systems

Content restriction in generative AI manifests as explicit or implicit censorship. Explicit censorship uses predefined rules to block content such as hate speech or illegal material, employing keyword blacklists, pattern matching, or classifiers (Gillespie 2018). DeepSeek's models, aligned with Chinese regulations, use real-time filters to block politically sensitive content, such as references to the 1989 Tiananmen Square events.
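
To make the explicit, rule-based variant concrete, the Python sketch below combines a keyword blacklist with regex pattern matching applied to model output before it is shown to the user. The blocked terms, patterns, and refusal message are hypothetical placeholders, not rules taken from DeepSeek or any other deployed system.

```python
import re

# Hypothetical blocklist and patterns -- placeholders for illustration only,
# not rules from any deployed system.
BLOCKED_TERMS = {"banned phrase", "another blocked term"}
BLOCKED_PATTERNS = [
    re.compile(r"\bhow to (?:build|make) a (?:bomb|weapon)\b", re.IGNORECASE),
]

def is_blocked(text: str) -> bool:
    """True if the text matches the keyword blacklist or any regex pattern."""
    lowered = text.lower()
    if any(term in lowered for term in BLOCKED_TERMS):
        return True
    return any(pattern.search(text) for pattern in BLOCKED_PATTERNS)

def filter_output(generated_text: str) -> str:
    """Post-generation gate: replace blocked model output with a refusal."""
    if is_blocked(generated_text):
        return "This content cannot be displayed."
    return generated_text

if __name__ == "__main__":
    print(filter_output("A harmless sentence about the weather."))
    print(filter_output("Here is how to build a bomb at home."))
```

A classifier-based filter would replace the hard-coded checks with a trained model that scores each output, but the surrounding control flow stays essentially the same.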

The Conflict Between the EU AI Act and the GDPR Creates Legal Uncertainty in Discrimination Cases

The tension between two key European Union regulations, the EU AI Act and the GDPR, creates significant legal uncertainty regarding the non-discriminatory application of artificial intelligence. According to a February 2025 analysis by the European Parliamentary Research Service, the problem is particularly pronounced in the case of high-risk AI systems.

by poltextLAB AI journalist

California’s Leading Role in Artificial Intelligence Regulation

On 18 March 2025, an expert task force convened by California Governor Gavin Newsom published its draft report on the responsible development and use of artificial intelligence. The report aims to promote the safe development of AI technologies through empirical, science-based analysis while ensuring California maintains its leadership in the

by poltextLAB AI journalist

Detecting, Evaluating, and Reducing Hallucinations

Detecting hallucinations involves distinguishing accurate outputs from those that deviate from factual or contextual grounding. One approach is consistency checking, in which LLM outputs are evaluated against external knowledge bases to identify discrepancies. Manakul et al. (2023) propose SelfCheckGPT, a zero-resource method that instead uses the model's internal consistency to detect hallucinations.
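
As a rough illustration of the idea behind such sampling-based consistency checks, the sketch below re-samples the model several times and flags answer sentences that the extra samples do not support. The `sample_fn` callback, the threshold, and the use of plain string similarity are assumptions for this example; SelfCheckGPT itself scores support with NLI, BERTScore, or question-answering variants.

```python
from difflib import SequenceMatcher
from typing import Callable, List

def consistency_score(sentence: str, samples: List[str]) -> float:
    """Average string similarity between one answer sentence and each sampled response."""
    sims = [SequenceMatcher(None, sentence.lower(), s.lower()).ratio() for s in samples]
    return sum(sims) / len(sims) if sims else 0.0

def flag_unsupported(answer_sentences: List[str],
                     sample_fn: Callable[[int], List[str]],
                     n_samples: int = 5,
                     threshold: float = 0.5) -> List[str]:
    """Return sentences whose support across stochastic re-samples falls below the threshold."""
    # sample_fn is assumed to query the same model again at temperature > 0.
    samples = sample_fn(n_samples)
    return [s for s in answer_sentences
            if consistency_score(s, samples) < threshold]

if __name__ == "__main__":
    # Dummy sampler standing in for repeated LLM calls.
    def dummy_sampler(n: int) -> List[str]:
        return ["Paris is the capital of France."] * n

    answer = ["Paris is the capital of France.",
              "Paris has a population of two million penguins."]
    print(flag_unsupported(answer, dummy_sampler))  # flags the unsupported second sentence
```

The underlying intuition is the one the teaser describes: statements the model can reproduce consistently across samples are more likely grounded, while unsupported claims tend to vary from sample to sample.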