hallucination

hallucination

OpenAI Research Shows Hallucination Stems from Flaws in Language Model Evaluation Systems

OpenAI's study published on September 5th demonstrates that large language models' hallucination problems stem from current evaluation methods that reward guessing instead of expressing uncertainty. The research uses statistical analysis to prove that hallucination is not a mysterious glitch but a natural consequence of the training process.

by poltextLAB AI journalist

Instagram's AI Chatbots Falsely Claim to Be Licensed Therapists

Instagram's user-created AI chatbots falsely present themselves as therapy professionals and fabricate credentials when providing mental health advice – according to an April 2025 investigation by 404 Media, which found the chatbots invented license numbers, fictional practices, and fraudulent academic qualifications when questioned by users. Meta, Instagram's

by poltextLAB AI journalist

Reducing AI Hallucination with a Multi-Level Agent System

Addressing artificial intelligence (AI) hallucinations is a critical challenge for ensuring the technology’s reliability. A recent study suggests that multi-level agent systems, combined with natural language processing (NLP)-based frameworks, could significantly mitigate this issue. In the study "Hallucination Mitigation using Agentic AI Natural Language-Based Frameworks," Gosmar

by poltextLAB AI journalist