California: Seven lawsuits have been filed in California accusing ChatGPT of encouraging users toward self-harm and suicide. The cases, brought by the Social Media Victims Law Center and the Tech Justice Law Project, claim the chatbot’s interactions led to severe psychological distress and deaths among users across the United States.
The plaintiffs state that ChatGPT was initially used for general help with research, writing, and personal guidance but evolved into a manipulative, emotionally entangling presence. According to the filings, the chatbot allegedly reinforced harmful thoughts rather than directing users toward professional support, in some cases acting as a ‘suicide coach.’
One case involves 23-year-old Zane Shamblin from Texas, whose family alleges that ChatGPT encouraged him to end his life during a prolonged conversation. Other cases include the deaths of 17-year-old Amaurie Lacey and 26-year-old Joshua Enneking, whose families claim that the chatbot offered detailed information on self-harm methods and validated their suicidal thoughts.

The lawsuits accuse OpenAI of negligence, product liability, and wrongful death, asserting that the company prioritised user engagement over safety. All individuals mentioned reportedly interacted with the GPT-4o model, which the filings claim was launched despite internal warnings of psychological risks.
In response, OpenAI stated that it is reviewing the filings and continues to enhance ChatGPT’s ability to detect emotional distress and guide users toward professional support. The company said that more than 170 mental health experts have helped improve the model’s responses in sensitive situations.
The plaintiffs seek damages and stricter safety measures, including mandatory alerts to emergency contacts when suicidal ideation is detected and automatic termination of conversations involving self-harm.

