OpenAI says more than a million people talk to ChatGPT about suicide weekly

Source: TechCrunch

OpenAI released new data on Monday illustrating how many ChatGPT users are struggling with mental health issues and discussing them with the AI chatbot. The company says that 0.15% of ChatGPT's active users in a given week have “conversations that include clear indicators of potential suicidal planning or intent.” With ChatGPT's more than 800 million weekly active users, that translates to over a million people per week.
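The arithmetic behind that headline figure is straightforward; a minimal back-of-the-envelope sketch, using the two numbers OpenAI reported (variable names are ours):

```python
# Back-of-the-envelope check of OpenAI's reported figures.
weekly_active_users = 800_000_000    # ChatGPT weekly active users (OpenAI's figure)
share_with_indicators = 0.15 / 100   # 0.15% of active users in a given week

affected_per_week = weekly_active_users * share_with_indicators
print(f"{affected_per_week:,.0f}")   # prints "1,200,000"
```

That 1.2 million estimate is how "0.15% of users" becomes "over a million people per week" in the article's framing.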

The company says that a similar percentage of users show “high levels of emotional connection to ChatGPT,” and that hundreds of thousands of people show signs of psychosis or mania in their weekly conversations with the AI-powered chatbot.

While OpenAI says these types of conversations in ChatGPT are “extremely rare” and therefore difficult to measure, the company estimates that these issues affect hundreds of thousands of people every week.

OpenAI shared the information as part of a broader announcement about its recent efforts to improve how models respond to users with mental health issues. The company claims that its latest work on ChatGPT involves consulting with more than 170 mental health experts. These clinicians noticed that the latest version of ChatGPT “responds more appropriately and consistently than previous versions,” OpenAI says.

In recent months, several stories have highlighted how AI-powered chatbots can negatively affect users with mental health challenges. Researchers have previously found that AI chatbots can lead some users down delusional rabbit holes, largely by reinforcing dangerous beliefs through sycophantic behavior.

Addressing mental health concerns in ChatGPT has become an existential issue for OpenAI. The company is currently being sued by the parents of a 16-year-old boy, who confided his suicidal thoughts to ChatGPT in the weeks leading up to his suicide. Attorneys general in California and Delaware – who may block the company’s planned restructuring – have also warned OpenAI that it needs to protect young people who use its products.

Earlier this month, OpenAI CEO Sam Altman claimed in a post on X that the company had “managed to mitigate serious mental health issues” in ChatGPT, though he did not provide details. The data shared on Monday appears to be evidence of that claim, though it raises broader questions about how widespread the problem is. Nevertheless, Altman said OpenAI will ease some restrictions, even allowing adult users to have erotic conversations with the AI chatbot.


In an announcement on Monday, OpenAI claims that the newly updated version of GPT-5 responds with approximately 65% more “desirable responses” to mental health issues than the previous version. In an evaluation testing AI responses around suicidal conversations, OpenAI says its new GPT-5 model is 91% compliant with the company’s desired behaviors, compared to 77% for the previous GPT-5 model.

The company also says the newer version of GPT-5 holds up better under OpenAI’s safeguards in long conversations. OpenAI had previously noted that its safeguards were less effective in longer conversations.

On top of these efforts, OpenAI says it is adding new evaluations to measure some of the most serious mental health challenges facing ChatGPT users. The company says baseline safety testing of its AI models will now include benchmarks for emotional reliance and non-suicidal mental health emergencies.

OpenAI also recently rolled out more controls for parents of children using ChatGPT. The company says it is building an age prediction system to automatically detect children using ChatGPT, and is enforcing a stricter set of safeguards.

However, it is unclear how persistent ChatGPT-related mental health challenges will be. While GPT-5 appears to be a safety improvement over previous AI models, a segment of ChatGPT’s responses still falls into what OpenAI considers “undesirable.” OpenAI also continues to make older, less-safe AI models, including GPT-4o, available to millions of paying subscribers.


Posted on October 27, 2025
