OpenAI Introduces Reasoning Model Routing and Parental Controls to Boost ChatGPT Safety

Key Points

  • OpenAI will route chats showing signs of acute distress to advanced reasoning models like GPT‑5‑thinking.
  • Parental controls will be introduced, allowing parents to link accounts and set age‑appropriate response rules.
  • Parents can disable memory and chat history and receive notifications when the system detects their teen is in acute distress.
  • Study Mode and session break reminders remain in place to promote healthy usage habits.
  • The safety upgrades are part of a 120‑day initiative targeting broader improvements within the year.
  • OpenAI is collaborating with mental‑health professionals via its Global Physician Network and Expert Council.


Enhanced Routing for Sensitive Interactions

OpenAI announced a new real‑time router that chooses between efficient chat models and more sophisticated reasoning models based on conversation context. When the system identifies signs of acute distress, it will switch the conversation to a reasoning model such as GPT‑5‑thinking, which spends more time processing the request and is designed to be more resistant to harmful prompts.
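
The announcement does not describe how the router works internally. As a rough illustration only, the sketch below shows how a per‑message router might hand a conversation to a reasoning model when a distress classifier fires; the function names, model identifiers, and keyword‑based classifier are all assumptions for illustration, not OpenAI's actual implementation.

```python
# Hypothetical sketch of context-based model routing.
# Model names and the distress check are illustrative placeholders.

from dataclasses import dataclass

DEFAULT_MODEL = "gpt-5-chat"        # fast chat model (illustrative name)
REASONING_MODEL = "gpt-5-thinking"  # slower reasoning model for sensitive contexts


@dataclass
class RoutingDecision:
    model: str
    reason: str


def detect_acute_distress(messages: list[str]) -> bool:
    """Placeholder classifier: a production system would use a trained
    safety model, not simple keyword matching."""
    distress_markers = ("self-harm", "hopeless", "can't go on")
    recent_text = " ".join(messages[-3:]).lower()
    return any(marker in recent_text for marker in distress_markers)


def route_conversation(messages: list[str]) -> RoutingDecision:
    """Pick a model for the next response based on conversation context."""
    if detect_acute_distress(messages):
        return RoutingDecision(REASONING_MODEL, "signs of acute distress detected")
    return RoutingDecision(DEFAULT_MODEL, "default routing")


if __name__ == "__main__":
    history = ["I've been feeling hopeless lately and I don't know what to do."]
    decision = route_conversation(history)
    print(decision.model, "-", decision.reason)
```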

Parental Controls Rollout

The company is also preparing to release parental controls that allow parents to link their account to their teenager’s. These controls include age‑appropriate model behavior rules, which are on by default, the option to disable memory and chat history, and notifications when the system detects acute distress in a teen. Parents will be able to shape how the chatbot responds to their teen, adding a safeguard against potentially harmful content.
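
To make the feature set concrete, here is a minimal sketch of what linked‑account settings along these lines could look like; the field names, defaults, and helper function are hypothetical and do not reflect OpenAI's actual API or configuration.

```python
# Hypothetical sketch of linked-account parental settings.
# Field names and defaults are illustrative only.

from dataclasses import dataclass


@dataclass
class TeenAccountControls:
    linked_parent_id: str
    age_appropriate_behavior: bool = True  # on by default, per the announcement
    memory_enabled: bool = True            # parents may turn this off
    chat_history_enabled: bool = True      # parents may turn this off
    distress_alerts: bool = True           # notify parent on detected acute distress


def apply_parental_preferences(controls: TeenAccountControls,
                               disable_memory: bool,
                               disable_history: bool) -> TeenAccountControls:
    """Apply a parent's choices to the linked teen account."""
    controls.memory_enabled = not disable_memory
    controls.chat_history_enabled = not disable_history
    return controls


if __name__ == "__main__":
    controls = TeenAccountControls(linked_parent_id="parent-123")
    controls = apply_parental_preferences(controls,
                                          disable_memory=True,
                                          disable_history=False)
    print(controls)
```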

Study Mode and Ongoing Safety Efforts

OpenAI previously introduced a Study Mode to encourage critical thinking while studying, and it continues to show in‑app reminders that suggest breaks during long, intensive sessions. The new safety features are part of a 120‑day initiative during which OpenAI plans to preview, and then launch, these and other improvements within the year.

Collaboration with Mental‑Health Experts

To shape its safety strategy, OpenAI is partnering with specialists in areas such as eating disorders, substance use, and adolescent health through its Global Physician Network and an Expert Council on Well‑Being and AI. These collaborations are intended to define well‑being metrics, set priorities, and design future safeguards.

Source: techcrunch.com