The Conflict Between the EU AI Act and the GDPR Creates Legal Uncertainty in Discrimination Cases

The conflict between two key European Union regulations, the EU AI Act and the GDPR, creates significant legal uncertainty regarding the non-discriminatory application of artificial intelligence. According to a February 2025 analysis by the European Parliamentary Research Service, this issue is particularly pronounced in the case of high-risk AI systems.

by poltextLAB AI journalist

California’s Leading Role in Artificial Intelligence Regulation

On 18 March 2025, an expert task force convened by California Governor Gavin Newsom published its draft report on the responsible development and use of artificial intelligence. The report aims to promote the safe development of AI technologies through empirical, science-based analysis while ensuring that California maintains its leadership in the field.

by poltextLAB AI journalist

Detecting, Evaluating, and Reducing Hallucinations

Detecting hallucinations involves distinguishing accurate outputs from those that deviate from factual or contextual grounding. One approach is consistency checking, in which LLM outputs are evaluated against external knowledge bases to identify discrepancies. Manakul et al. (2023) propose SelfCheckGPT, a zero-resource method that instead uses the model's internal consistency, measured across multiple sampled responses, to detect hallucinations.
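
The sampling-based idea behind SelfCheckGPT can be illustrated with a minimal sketch: query the model several times and flag answer sentences that none of the extra samples corroborate. The snippet below is not the authors' implementation (SelfCheckGPT scores consistency with metrics such as BERTScore, NLI, or prompting); it uses a simple token-overlap proxy, and the example sentences and threshold are invented for illustration.

```python
from typing import List


def consistency_scores(
    answer_sentences: List[str],
    sampled_responses: List[str],
) -> List[float]:
    """For each sentence of the main answer, return a support score in [0, 1]:
    the best token-overlap (Jaccard) with any independently sampled response.
    Low scores flag sentences that the other samples do not corroborate."""
    def tokens(text: str) -> set:
        return {t.strip(".,;:!?").lower() for t in text.split() if t.strip(".,;:!?")}

    scores = []
    for sentence in answer_sentences:
        s_tok = tokens(sentence)
        best = 0.0
        for sample in sampled_responses:
            r_tok = tokens(sample)
            if s_tok and r_tok:
                best = max(best, len(s_tok & r_tok) / len(s_tok | r_tok))
        scores.append(best)
    return scores


if __name__ == "__main__":
    # Stubbed generations; in practice these would come from the same LLM
    # queried several times at non-zero temperature.
    answer = [
        "The Eiffel Tower is in Paris.",
        "It was completed in 1921.",  # unsupported claim
    ]
    samples = [
        "The Eiffel Tower stands in Paris and was completed in 1889.",
        "Completed in 1889, the Eiffel Tower is a Paris landmark.",
    ]
    for sentence, score in zip(answer, consistency_scores(answer, samples)):
        flag = "LIKELY HALLUCINATION" if score < 0.3 else "supported"
        print(f"{score:.2f}  {flag:20s}  {sentence}")
```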

Anthropic Has Introduced the Claude for Education Platform

On 2 April 2025, Anthropic officially announced Claude for Education, an AI assistant designed specifically for higher education institutions that focuses on fostering critical thinking rather than providing straightforward answers to students. Through its "Learning Mode" feature, Claude guides students through the problem-solving process by posing questions.

by poltextLAB AI journalist
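
Anthropic has not published how Learning Mode is implemented; purely to illustrate the Socratic-questioning idea described above, a rough sketch using the public Anthropic Python SDK could look like the following, where the model id and system prompt wording are placeholder assumptions rather than Anthropic's actual configuration.

```python
# Illustration only: Socratic-style tutoring prompts via the Anthropic
# Python SDK (pip install anthropic). Not Anthropic's Learning Mode.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

SOCRATIC_SYSTEM_PROMPT = (
    "You are a tutor. Do not give the final answer directly. "
    "Instead, respond with one or two guiding questions that help the "
    "student take the next step toward solving the problem themselves."
)

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # placeholder model id
    max_tokens=300,
    system=SOCRATIC_SYSTEM_PROMPT,
    messages=[{"role": "user", "content": "How do I solve 2x + 6 = 14?"}],
)
print(response.content[0].text)
```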