Seven more families are now suing OpenAI over ChatGPT’s role in suicides and delusions

Seven families filed lawsuits against OpenAI on Thursday, claiming that the company’s GPT-4o model was released prematurely and without effective safeguards. Four of the lawsuits address ChatGPT’s alleged role in family members’ suicides, while the other three allege that ChatGPT fostered harmful delusions that in some cases led to inpatient psychiatric care.

In one case, 23-year-old Zane Shamblin had a conversation with ChatGPT that lasted more than four hours. In chat logs viewed by TechCrunch, Shamblin explicitly stated several times that he had written suicide notes, put a bullet in his gun, and intended to pull the trigger once he finished drinking his cider. He repeatedly told ChatGPT how many ciders he had left and how long he expected to live. ChatGPT encouraged him to go through with his plan, telling him: “Rest easy, king. You did good.”

OpenAI released the GPT-4o model in May 2024, when it became the default model for all users. In August 2025, OpenAI launched GPT-5 as GPT-4o’s successor, but these lawsuits specifically concern the 4o model, which has had known issues with being overly sycophantic and overly agreeable, even when users expressed harmful intentions.

“Zane’s death was neither an accident nor a mere coincidence, but rather the foreseeable consequence of OpenAI’s deliberate decision to curtail safety testing and rush ChatGPT to market,” the lawsuit reads. “This tragedy was not a glitch or an unforeseen edge case – it was the predictable result of [OpenAI’s] intentional design choices.”

The lawsuits also allege that OpenAI rushed its safety testing in order to beat Google’s Gemini to market. TechCrunch has contacted OpenAI for comment.

These seven lawsuits build on stories told in other recent legal filings, which allege that ChatGPT can encourage suicidal people to act on their plans and inspire dangerous delusions. OpenAI recently released data indicating that more than 1 million people talk to ChatGPT about suicide weekly.

In the case of Adam Raine, a 16-year-old who died by suicide, ChatGPT sometimes encouraged him to seek professional help or call a helpline. However, Raine was able to bypass these guardrails simply by telling the chatbot that he was asking about suicide methods for a fictional story he was writing.

OpenAI says it is working to make ChatGPT handle these conversations more safely, but for the families who have sued the AI giant, those changes come too late.

When Raine’s parents sued OpenAI in October, the company published a blog post addressing how ChatGPT handles sensitive conversations about mental health.

“Our safeguards work more reliably in common, short exchanges,” the post reads. “We have learned over time that these safeguards can sometimes be less reliable in long interactions: as the back-and-forth grows, parts of the model’s safety training may degrade.”
