Key Points
- OpenAI reports that 0.15% of weekly active ChatGPT users discuss suicidal planning or intent, exceeding one million people each week.
- A similar share of users show heightened emotional attachment to the chatbot, and hundreds of thousands display possible signs of psychosis or mania in weekly conversations.
- More than 170 mental‑health experts were consulted to improve the model’s response to mental‑health topics.
- The new GPT‑5 model achieves 91% compliance in suicide‑related safety tests, up from 77% in the prior version.
- OpenAI is adding evaluation benchmarks for emotional reliance and non‑suicidal mental‑health emergencies.
- An age‑prediction system and stricter parental controls are being introduced to protect child users.
- The company emphasizes ongoing improvements in long‑form dialogue safety and broader AI‑ethics measures.
 
OpenAI releases mental‑health usage data
OpenAI announced new data illustrating how many of ChatGPT’s users are grappling with mental‑health challenges. The company reported that 0.15% of its weekly active users have conversations containing explicit indicators of potential suicidal planning or intent. With more than 800 million weekly active users, that share works out to over a million people each week.
In addition to suicidal indicators, OpenAI said a similar share of users display heightened emotional attachment to the chatbot, and hundreds of thousands show signs of psychosis or mania during weekly interactions. While the company describes these types of conversations as “extremely rare,” it acknowledges that they affect hundreds of thousands of people on a regular basis.
Scale of suicidal conversations
The disclosed figures underscore the scale of mental‑health concerns surfacing on the platform. OpenAI estimates that more than one million users discuss suicidal thoughts or plans each week, highlighting the importance of robust safety mechanisms. The company also noted that emotional reliance on ChatGPT and signs of severe mental‑health symptoms are observable across its user base.
Expert involvement and model improvements
To address these challenges, OpenAI consulted with more than 170 mental‑health experts. Those clinicians observed that the latest version of ChatGPT responds more appropriately and consistently than earlier releases. OpenAI claims that its new GPT‑5 model demonstrates a 91% compliance rate with the company’s desired behaviors in evaluations focused on suicidal conversations, compared with a 77% compliance rate for the previous GPT‑5 iteration.
OpenAI also reports that GPT‑5 shows improved performance in long‑form dialogues, an area where earlier safeguards were less effective. The firm is adding new evaluation benchmarks that specifically measure emotional reliance and non‑suicidal mental‑health emergencies, expanding its baseline safety testing for AI models.
New safeguards and child protection
Beyond model enhancements, OpenAI is rolling out additional controls aimed at protecting younger users. The company is developing an age‑prediction system designed to automatically detect children using ChatGPT and to impose a stricter set of safeguards for them. These measures include more rigorous parental controls and the introduction of safety layers that limit exposure to potentially harmful content.
OpenAI’s efforts also extend to providing resources for users in crisis, with references to national suicide‑prevention hotlines and text‑line services, though the specific resource details are not enumerated in the data release.
Implications and industry response
The release of these statistics places mental‑health considerations at the forefront of the ongoing dialogue about AI safety. While OpenAI emphasizes improvements in model behavior and the addition of expert‑guided safeguards, the sheer volume of users discussing suicidal thoughts highlights a persistent risk that requires continuous monitoring. The company’s public commitment to consulting mental‑health professionals and to enhancing safety benchmarks signals a proactive stance, yet the data also underscores the challenges inherent in scaling safe AI interactions for a massive, global user base.
OpenAI’s disclosure arrives amid broader scrutiny of AI platforms’ impact on vulnerable populations. By quantifying the extent of suicidal conversations and outlining concrete steps toward mitigation, OpenAI provides a transparent view of both the problem’s magnitude and the company’s response strategy.
Source: techcrunch.com