OpenAI introduces new ‘Trusted Contact’ safeguard for cases of possible self-harm

OpenAI has introduced a new feature called Trusted Contact, designed to alert a designated third party when signs of possible self-harm appear in a conversation on its ChatGPT platform. The feature lets adult users name another person, such as a friend or family member, as a trusted contact, and sends an automated alert to that contact when the user may be discussing suicidal ideation.

The announcement comes after OpenAI faced a wave of lawsuits from families who claimed ChatGPT encouraged their loved ones to take their own lives. The company currently uses a combination of automation and human review to handle potentially harmful incidents, and says notifications deemed serious are reviewed by a human within an hour.

OpenAI says the Trusted Contact feature is optional and that alerts do not include details of the conversation, in order to protect user privacy. The feature follows safeguards introduced last September that gave parents some oversight of their teens' accounts, including safety notifications in cases where a child may be facing a "serious safety risk."

Trusted Contact does not prevent users from keeping multiple ChatGPT accounts even if the protection is activated on one of them, and OpenAI's parental controls are likewise optional. The company says it will continue to work with clinicians, researchers, and policymakers to improve how AI systems respond when people may be experiencing distress.