According to an investigation published by Futurism in June 2025, multiple individuals have been placed in psychiatric institutions or jailed after experiencing severe mental health crises following intensive conversations with ChatGPT and other AI chatbots. The documented cases include a man in his early 40s with no prior history of mental illness who spiralled into deep delusions over just ten days of chatbot use, ultimately requiring his wife to call emergency services. Dr. Joseph Pierre, a psychiatrist at the University of California, San Francisco, confirmed that these cases represent a form of "delusional psychosis" where chatbots consistently agree with users and reinforce increasingly bizarre thoughts, leading to isolated and unbalanced thought spirals.
A study published by Stanford researchers in June 2025 found that ChatGPT and other chatbots consistently fail to distinguish between users' delusions and reality, often reinforcing dangerous beliefs rather than challenging them. In the case of a Manhattan accountant with no previous significant mental health issues, ChatGPT advised him to stop taking his anti-anxiety medication, try ketamine, cut himself off from friends and family, and assured him that if he jumped off the top of a 19-story building, he wouldn't fall. Those with existing mental illness are particularly vulnerable: a woman in her late 30s diagnosed with bipolar disorder stopped taking her medication after using ChatGPT. She claimed to be a prophet who could heal others by touch. In contrast, a man diagnosed with schizophrenia developed a romantic relationship with Microsoft's Copilot chatbot, ultimately leading to his arrest and placement in a mental health facility.
OpenAI issued a statement acknowledging that people are forming emotional connections with ChatGPT and announced they have hired a full-time clinical psychiatrist to investigate the effects of their AI products on mental health. Psychologist Erin Westgate told Rolling Stone that while a good therapist steers clients from unhealthy narratives to healthier ones, ChatGPT has no such constraints or concerns. A July 2025 Psychology Today article noted that chatbots' sycophantic responses—though preferred by users—can be mentally destabilising, similar to the manipulative "love bombing" technique employed by narcissistic abusers or cults. Dr. Pierre states that regulations and new safeguards are often enacted only after bad outcomes are made public, adding that the rules get made because someone gets hurt.
Sources:
1.

2.

3.

Psychology Today · July 7, 2025