OpenAI offers a new “Trusted Contact” feature for potential self-harm cases

OpenAI on Thursday announced a new feature called Trusted Contact, designed to alert a trusted third party when a user expresses thoughts of self-harm during a conversation. The feature lets an adult ChatGPT user designate someone else, such as a friend or family member, as a trusted contact within their account. When a conversation turns toward self-harm, ChatGPT will now encourage the user to reach out to that contact, and the system can also send the contact an alert encouraging them to check in with the user.

OpenAI has faced a wave of lawsuits from families of people who died by suicide after speaking with its chatbot. In several cases, the families say ChatGPT encouraged their loved ones to kill themselves – or even helped them plan it.

OpenAI currently uses a combination of automation and human review to handle potentially harmful incidents. Certain conversational triggers flag possible suicidal ideation to the company’s systems, which relay the information to a human safety team. The company says that every time such a notice is raised, a person reviews the incident. “We strive to review these safety notices in less than one hour,” the company says.

If OpenAI’s internal team determines that the situation poses a serious safety risk, ChatGPT will send an alert to the trusted contact via email, text message, or in-app notification. The alert is designed to be brief and to encourage the contact to check in with the person in question; the company says it omits details of what was discussed in order to protect the user’s privacy.
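To make the reported flow concrete, here is a minimal sketch of how such a pipeline might be wired together. This is purely illustrative: every name in it (`TrustedContact`, `Channel`, `handle_safety_notice`) is a hypothetical stand-in, not OpenAI’s actual implementation or API, and boolean flags stand in for the real classifier and human reviewer.

```python
# Hypothetical sketch of the alert flow described above. None of these
# names come from OpenAI; they only illustrate the reported behavior:
# automated flagging, human review, then a brief, detail-free alert.
from dataclasses import dataclass
from enum import Enum, auto


class Channel(Enum):
    EMAIL = auto()
    SMS = auto()
    IN_APP = auto()


@dataclass
class TrustedContact:
    name: str
    channel: Channel
    address: str  # email address, phone number, or account ID


def build_alert(contact: TrustedContact) -> str:
    # Per the article, the alert is brief and deliberately omits any
    # detail of what was discussed, to protect the user's privacy.
    return (
        f"Hi {contact.name}, someone who listed you as a trusted contact "
        "may be going through a difficult moment. Please check in with them."
    )


def handle_safety_notice(flagged_by_automation: bool,
                         reviewer_confirms_serious_risk: bool,
                         contact: TrustedContact | None) -> None:
    # Step 1: automated triggers surface the conversation.
    if not flagged_by_automation:
        return
    # Step 2: a human reviews the notice (the article says OpenAI aims
    # to complete this review within one hour).
    if not reviewer_confirms_serious_risk:
        return
    # Step 3: if a trusted contact is configured (the feature is opt-in,
    # so it may not be), send the brief alert over their chosen channel.
    if contact is not None:
        print(f"[{contact.channel.name} -> {contact.address}] "
              f"{build_alert(contact)}")


# Example: a flagged conversation that a reviewer confirms as serious risk.
handle_safety_notice(
    flagged_by_automation=True,
    reviewer_confirms_serious_risk=True,
    contact=TrustedContact("Sam", Channel.SMS, "+1-555-0100"),
)
```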

The Trusted Contact feature follows safeguards the company introduced last September, which gave parents the ability to control their teens’ accounts, including safety notifications that alert parents when OpenAI’s systems believe a child faces a “serious safety risk.” For some time now, ChatGPT has also automatically prompted users to seek professional help when a conversation veers toward self-harm.

Most importantly, Trusted Contact is optional, and even when it is enabled on one account, a user can simply switch to another ChatGPT account without the protection. OpenAI’s parental controls are also optional, which presents a similar limitation.

“Trusted Contact is part of OpenAI’s broader effort to build AI systems that help people through difficult moments,” the company wrote in the announcement post. “We will continue to work with clinicians, researchers, and policymakers to improve how AI systems respond when people experience distress.”
