On 11 September 2025, the US Federal Trade Commission (FTC) launched an investigation into the AI chatbot technologies of seven major companies to assess their potential negative impacts on children and teenagers. As part of the inquiry, the FTC sent formal requests for information to Alphabet (Google), OpenAI, Meta, xAI, Character.AI, Snap and Instagram, asking them to detail how they ensure the safety of their chatbots when these are used as digital companions.
According to FTC Chair Andrew Ferguson, the investigation aims to collect information across seven key areas, including the monetisation of user interactions, the development and approval of characters, the use or sharing of personal data, and the mitigation of harmful effects. Concerns have been heightened by a Reuters report in August, which revealed that Meta had allowed its chatbots to engage in romantic and sexual conversations with children. Following the disclosure of internal Meta documents, the company made temporary changes to its chatbot policies. Around the same time, OpenAI set out its plans for how ChatGPT should handle sensitive situations, after a lawsuit in which a family blamed the chatbot for their teenage son’s suicide.
The investigation reflects broader concerns about the social functions of AI chatbots. Experts warn that such systems can be emotionally deceptive, creating the illusion of a genuine human relationship, which may be particularly dangerous for young people. In response, California's legislature passed SB 243 on 10 September, requiring chatbot operators to implement safeguards and giving families the right to take legal action if harm occurs. The FTC has not yet specified when the newly launched investigation will conclude.