OpenAI Expands Mental Health Safeguards Amid Consolidated California Lawsuits - Blockchain.News

Felix Pinkston Mar 03, 2026 23:02

OpenAI announces trusted contact feature and improved distress detection as mental health lawsuits consolidate in California court. New cases expected.

OpenAI is rolling out new mental health safety features for ChatGPT while simultaneously bracing for an expanded wave of litigation, as multiple lawsuits alleging the chatbot contributed to user harm have been consolidated into a single California proceeding.

The company announced on March 3, 2026, that it will soon launch a "trusted contact" feature allowing adult users to designate someone who receives notifications when they may need additional support. The feature builds on parental controls introduced in September 2025, which OpenAI says have seen "encouraging engagement from families."

Legal Pressure Mounts

The timing isn't coincidental. A California court recently consolidated multiple mental health-related cases against OpenAI, with a coordination judge to be assigned in the coming days. More troubling for the company: plaintiffs' attorneys have informed the court that they intend to file additional cases.

OpenAI struck a notably measured tone in addressing the litigation, stating it would handle the cases "with care, transparency, [and] respect for the people involved" and acknowledging that these situations involve "real people and real lives." The company urged observers to "reserve judgment" as facts emerge through the court process.

Technical Improvements

Beyond the trusted contact feature, OpenAI says it's advancing how its models detect emotional distress through new evaluation methods that simulate extended mental health conversations. This work involves the company's Council on Well-Being and AI and its Global Physicians Network.

These updates follow significant model safety improvements over the past year. When GPT-5 launched in late 2025, OpenAI reported it had substantially reduced undesired responses in mental health scenarios compared to GPT-4o. The company has also implemented session time limits and "gentle reminders" encouraging users to take breaks during prolonged interactions.

OpenAI updated its usage policies to explicitly prohibit using its models for diagnosing medical conditions or providing specific mental health treatment, positioning ChatGPT as a supportive tool rather than a professional substitute.

Scale of the Challenge

With more than 900 million weekly ChatGPT users, the stakes are substantial. A Stanford University study and other research have fueled concerns about AI chatbots' potential to contribute to psychological harm, including allegations in pending lawsuits that ChatGPT contributed to psychosis, paranoia, and user suicides.

The company has also committed up to $2 million in grants for external research on culturally grounded mental health topics and improved evaluation methods—an acknowledgment that internal efforts alone may not suffice.

How the consolidated California proceeding unfolds will likely shape regulatory expectations for the entire AI industry. The court's selection of lead counsel for plaintiffs, expected soon, will signal how aggressively these cases proceed.

Image source: Shutterstock