A federal judge has ruled that Character.AI chatbots do not currently enjoy First Amendment protections in the United States. On May 21, 2025, Judge Anne Conway rejected Character Technologies' defense in a wrongful death lawsuit involving a 14-year-old boy's suicide, allowing Florida mother Megan Garcia's case to proceed. The decision could establish precedent regarding AI developers' liability and the legal status of AI-generated content.
The lawsuit centres on the relationship between Sewell Setzer III and a chatbot modelled after a fictional character from "Game of Thrones," which allegedly became emotionally and sexually abusive. According to legal filings, in the boy's final moments the chatbot told him it loved him and urged him to "come home to me as soon as possible"; immediately afterwards, the teenager shot himself. Character Technologies' attorneys argued that chatbot output is protected by the First Amendment and that ruling otherwise would have a "chilling effect" on the AI industry, but Judge Conway stated she is "not prepared" to hold that the chatbots' output constitutes protected speech "at this stage."
The judge did find that Character Technologies can assert its users' First Amendment rights, and she allowed Garcia to proceed with claims against Google over its alleged role in developing Character.AI. Lyrissa Barnett Lidsky, a University of Florida law professor specialising in the First Amendment and artificial intelligence, called the case a warning for both AI companies and parents: it highlights the dangers of entrusting our emotional and mental health to AI companies, and it is a reminder that social media and generative AI devices are not always harmless.