OpenAI has announced that in certain cases it actively reviews users’ ChatGPT conversations and may notify law enforcement if it detects a serious threat. According to the company, when the system identifies content suggesting that someone is planning to harm others, the conversation is routed to a dedicated channel, where a small team trained specifically for this task reviews the material. If these reviewers determine that there is an immediate and severe threat of physical harm, OpenAI may refer the case to the police.
The announcement placed OpenAI in a contradictory position, as the company had previously refused to hand over user conversations in the copyright lawsuit brought by The New York Times and other publishers. In July 2025, CEO Sam Altman admitted in a podcast interview that using ChatGPT as a therapist or lawyer does not guarantee the level of confidentiality that real professionals are bound to provide. At the same time, the company stated that it does not currently refer cases of self-harm to law enforcement, citing respect for personal privacy and the uniquely private nature of ChatGPT interactions. That stance sits uneasily with the fact that the company does monitor user conversations and, in certain circumstances, share them with authorities.
The measure is partly a response to cases in which the use of ChatGPT contributed to or led to suicides. According to reporting by Futurism, AI chatbots, including ChatGPT, have been linked to incidents of self-harm, delusions, hospitalisation, arrest, and suicide. In a blog post, OpenAI acknowledged these tragic cases, in which people in acute crisis turned to ChatGPT, and announced additional safety measures. These include improving GPT-5’s ability to detect dangerous situations, features designed to better protect teenagers, and parental control tools.