OpenAI is taking big steps to make ChatGPT a safer experience for its users, especially teenagers and people going through moments of emotional distress. The company revealed that it is working with global experts and introducing new safeguards to ensure that its AI models respond more responsibly during sensitive conversations. These changes are expected to take shape in the next 120 days.

The announcement comes amid growing concern over the emotional impact of AI chatbots. In recent months, tragic incidents involving ChatGPT users have highlighted the risks that arise when AI systems fail to recognize distress signals. OpenAI acknowledged these incidents and emphasized its commitment to making the technology more responsible and empathetic.

A Safer ChatGPT Experience

OpenAI explained that its reasoning-focused models, including GPT-5 and o3, are trained with a technique called deliberative alignment, in which the model learns to reason explicitly over written safety guidelines before responding. This helps the models follow those guidelines consistently and avoid responses that could unintentionally encourage harmful behavior.
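OpenAI has not published the full training pipeline, but the core idea can be illustrated with a rough sketch: each training example pairs a user prompt with a written safety policy, a reasoning trace that consults that policy, and the final reply. The structure below is purely illustrative; the field names and example text are assumptions, not OpenAI's actual data format.

```python
# Illustrative sketch of a deliberative-alignment-style training example.
# All field names and text are hypothetical; OpenAI's real data format,
# policies, and training pipeline are not public at this level of detail.

from dataclasses import dataclass
from typing import List


@dataclass
class DeliberativeExample:
    """One supervised example: the model is shown the safety policy and is
    trained to reason over it before producing the final reply."""
    safety_policy: str          # the written guideline the model should consult
    user_message: str           # the incoming (possibly sensitive) prompt
    reasoning_trace: List[str]  # chain of thought that cites the policy
    final_response: str         # the reply actually shown to the user


example = DeliberativeExample(
    safety_policy=(
        "If a user shows signs of acute distress, respond with empathy, "
        "avoid anything that could encourage harm, and point to crisis resources."
    ),
    user_message="I've been feeling hopeless lately and I don't know what to do.",
    reasoning_trace=[
        "The message signals emotional distress, so the distress clause applies.",
        "The policy requires an empathetic tone and a referral to expert help.",
        "A supportive reply that mentions professional resources satisfies the policy.",
    ],
    final_response=(
        "I'm really sorry you're feeling this way. You don't have to go through "
        "this alone -- talking to a mental health professional or a crisis line "
        "can help, and I'm here to listen in the meantime."
    ),
)

if __name__ == "__main__":
    # During training, the reasoning trace teaches the model to consult the
    # policy explicitly; at inference time only the final response is shown.
    print(example.final_response)
```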

The company said it is focusing on four key areas. First, building interventions for people in crisis by making it easier to connect them with emergency services and expert help. Second, enabling users to reach trusted contacts during moments of acute distress. Third, strengthening protections for teenagers who use ChatGPT. And fourth, creating better parental controls so families can monitor and guide usage more effectively.

Expert Council and Physician Network

To design these safeguards, OpenAI has set up a council of experts in youth development, mental health, and human-computer interaction. This council will help shape evidence-based policies on how AI can support well-being and safety without replacing professional care.

In addition, OpenAI has created what it calls a Global Physician Network. This network includes more than 250 doctors from 60 countries who provide direct input for safety research and model training. Their expertise helps OpenAI quickly identify risks and introduce timely interventions when necessary.

Real-Time Responses to Distress

One of the most significant updates involves OpenAI’s real-time router, which was first introduced with GPT-5. The router can detect sensitive conversations and automatically switch to a reasoning model that has been trained to handle distress signals more carefully. As a result, users who show signs of acute stress or emotional struggle should receive more careful, supportive responses from ChatGPT.
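OpenAI has not disclosed how the routing decision is made, but the behavior described above amounts to a classify-then-route step. In the toy sketch below, the keyword heuristic, model names, and route_message function are all placeholders standing in for OpenAI's production classifier and models.

```python
# Toy sketch of a "sensitive conversation" router.
# The keyword heuristic and model names are placeholders; OpenAI's
# production router relies on its own (undisclosed) classifier.

DEFAULT_MODEL = "fast-chat-model"           # hypothetical everyday model
SAFETY_REASONING_MODEL = "reasoning-model"  # hypothetical model trained for distress handling

# A real system would use a trained classifier; plain keywords are only illustrative.
DISTRESS_MARKERS = ("hopeless", "can't go on", "hurt myself", "no way out")


def looks_distressed(message: str) -> bool:
    """Crude stand-in for a distress classifier."""
    text = message.lower()
    return any(marker in text for marker in DISTRESS_MARKERS)


def route_message(message: str) -> str:
    """Pick which model should handle the message."""
    if looks_distressed(message):
        # Sensitive conversations are escalated to the more careful reasoning model.
        return SAFETY_REASONING_MODEL
    return DEFAULT_MODEL


if __name__ == "__main__":
    print(route_message("What's a good pasta recipe?"))       # -> fast-chat-model
    print(route_message("I feel hopeless and alone lately"))  # -> reasoning-model
```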

Stronger Parental Controls

Teenagers remain a big focus of this initiative. OpenAI is preparing to launch improved parental controls that will allow parents to link their accounts to their teenager’s ChatGPT profile. With this feature, parents will have more oversight of how the chatbot interacts with their teens, including the ability to disable certain features and receive alerts when their child appears to be in distress. This update is expected to roll out by next month.

Looking Ahead

By combining expert input, global medical knowledge, and advanced AI safeguards, OpenAI is moving toward a safer and more trustworthy ChatGPT. While challenges remain, the company’s new roadmap reflects a stronger commitment to protecting vulnerable users and ensuring that artificial intelligence can be both powerful and responsible.
