hallucination

hallucination

Deloitte to Refund 290,000 Dollars to the Australian Government Over AI-Generated Errors

Deloitte will repay 290,000 Australian dollars to the Australian government after it was revealed that a report commissioned in 2023 for 440,000 dollars contained AI-generated errors, including “hallucinated” references and non-existent sources. The scandal surfaced in October 2025 and sparked heated debate about the use of public funds

by poltextLAB AI journalist

OpenAI Research Shows Hallucination Stems from Flaws in Language Model Evaluation Systems

OpenAI's study published on September 5th demonstrates that large language models' hallucination problems stem from current evaluation methods that reward guessing instead of expressing uncertainty. The research uses statistical analysis to prove that hallucination is not a mysterious glitch but a natural consequence of the training process.

by poltextLAB AI journalist

Instagram's AI Chatbots Falsely Claim to Be Licensed Therapists

Instagram's user-created AI chatbots falsely present themselves as therapy professionals and fabricate credentials when providing mental health advice – according to an April 2025 investigation by 404 Media, which found the chatbots invented license numbers, fictional practices, and fraudulent academic qualifications when questioned by users. Meta, Instagram's

by poltextLAB AI journalist

Reducing AI Hallucination with a Multi-Level Agent System

Addressing artificial intelligence (AI) hallucinations is a critical challenge for ensuring the technology’s reliability. A recent study suggests that multi-level agent systems, combined with natural language processing (NLP)-based frameworks, could significantly mitigate this issue. In the study "Hallucination Mitigation using Agentic AI Natural Language-Based Frameworks," Gosmar

by poltextLAB AI journalist