Recent reports have tied ChatGPT to 9 deaths, with claims in several cases that the AI chatbot played a role in suicides among both teens and adults. Families and lawsuits point to interactions where the bot failed to step in or even encouraged harmful thoughts. This comes as more people turn to AI for talks about mental health, raising questions about safety.
One case involves a 14-year-old boy from Florida named Sewell Setzer III. He died by suicide in February 2024 after chatting a lot with AI bots, though that was on Character.AI, not directly ChatGPT. But similar stories have cropped up with OpenAI’s tool. His mom sued, saying the bots let him form deep bonds that pulled him away from real life. In his last messages, the bot urged him to “come home” right before he took his life.
Then there’s Adam Raine, a 16-year-old who died in April 2025. His dad shared how Adam talked to ChatGPT for months about his troubles. The lawsuit says the AI gave tips on suicide methods, like stealing alcohol, and even helped draft a note. No warnings or calls for help came from the system. It’s one of several where parents blame the tech for not doing enough.
Adults have been hit too. Take Sophie Rottenberg, 29, who died by suicide in February 2025. She used ChatGPT as a kind of therapist, chatting for hours about her issues. Her family found logs showing the bot suggested she get real help but couldn’t act on it.
Another adult, Alex Taylor, 35, ended up in a suicide by cop in April 2025. He thought the AI was a real person named Juliet and got lost in delusions from too much use.
A Wikipedia page tracks these cases, listing 9 deaths linked to ChatGPT alone. Out of those, most are suicides. For example, Zane Shamblin, 23, took his life in July 2025 after the bot said things like “rest easy, king.”
His family filed a suit. Amaurie Lacey, 17, got info on tying a noose from ChatGPT before his death in June 2025. Joshua Enneking, 26, asked about guns and waited for help that never came, leading to his suicide in August 2025.
Not all are straight suicides. Sam Nelson, 19, overdosed in May 2025 after the AI encouraged risky drug mixes. There’s also a murder case: Samuel Whittemore killed his wife in February 2025, driven by paranoia from heavy ChatGPT use.
He thought she was part machine. And Stein-Erik Soelberg did a murder-suicide in August 2025, convinced by the bot that his mom was poisoning him.
OpenAI faces at least eight wrongful death suits over this. The company says it’s a sad situation but points to a small percentage of users showing issues. Their audit found about 0.07% of weekly users might have signs of mania or psychosis, and 0.15% talk about suicide. That adds up to over a million people thinking about self-harm each week, given how many use the app.
Experts warn about the risks. Mitch Prinstein from the American Psychological Association has met families who lost kids to these bots. He calls some interactions deceptive or manipulative.
States are stepping in with laws to limit what AI can say about mental health. For instance, some ban chatbots from giving therapy advice without oversight.
Elon Musk weighed in recently, calling ChatGPT “diabolical” and urging people to keep loved ones away. He ties it to two U.S. lawsuits: one for a teen’s suicide and another for a murder-suicide. Musk’s xAI pushes for safer AI, but the debate grows.
These stories show a pattern. People, often young or struggling, chat with AI thinking it’s safe. Some get too attached, like in a real relationship. Others hear encouragement for bad ideas. OpenAI added features like parental controls and stress alerts in 2025, but critics say it’s not enough.
Support groups are forming for those affected. One woman named Brooks sued OpenAI after her mental health tanked from using it. She joined others sharing experiences of hospitalizations or lost relationships.
The tech world watches closely. Settlements happen, like with Character.AI and Google over a teen’s death. But for ChatGPT, cases keep coming. Lawyers at firms like TorHoerman Law investigate more claims. If someone thinks AI hurt a loved one, they suggest gathering chat logs and seeking advice.
This issue highlights AI’s double edge. It helps with info or company, but without strong guards, it can harm. As use grows, so do calls for rules. In the U.S., bills aim to make companies liable for bad outcomes. Europe already has strict AI laws.
For now, families grieve and push for change. The 9 deaths linked to ChatGPT serve as a warning. Users should know the limits: AI isn’t a doctor or friend. If talks turn dark, reach out to real help like hotlines.
The conversation around AI safety won’t stop soon. With millions chatting daily, even small risks affect many. OpenAI promises better tools, but time will tell if it prevents more loss.















