Here's how ChatGPT parental controls will work, and they might just be the AI safety feature parents have been waiting for
OpenAI is adding mental health alerts and account linking to help families navigate the emotional complexity of growing up with AI

- OpenAI is introducing parental controls to ChatGPT
- Parents will be able to link accounts, set feature restrictions, and receive alerts if their teen shows signs of emotional distress
- Sensitive ChatGPT conversations will also be routed through more cautious models trained to respond to people in crisis
OpenAI is implementing safety upgrades to ChatGPT designed to protect teenagers and people dealing with emotional crises. The company announced plans to roll out parental controls that will let parents link their accounts to their teens' accounts, starting at age 13. Parents will be able to restrict features and will receive real-time alerts if the AI detects messages that could indicate depression or other distress.
The update shows OpenAI acknowledging that teens are using ChatGPT, and that they sometimes treat the AI like a friend and confidant. Though the announcement doesn't say so directly, it also reads as a response to recent high-profile cases in which people have claimed that interacting with an AI chatbot contributed to the suicide of a loved one.
The new controls will begin rolling out within the next month. Once set up, parents can decide whether the AI chatbot can save chat history or use its memory feature, and age-appropriate content guidelines governing how the AI responds will be on by default. If a conversation is flagged, parents will receive a notification. This isn't universal surveillance; parents won't otherwise get any notice of their teen's conversations. Instead, the alerts are reserved for moments when a real-world check-in might matter most.
"Our work to make ChatGPT as helpful as possible is constant and ongoing. We’ve seen people turn to it in the most difficult of moments," OpenAI explained in a blog post. "That’s why we continue to improve how our models recognize and respond to signs of mental and emotional distress, guided by expert input."
Emotionally safe models
For adults and teens, OpenAI says it will begin routing sensitive conversations that involve mental health struggles or suicidal ideation through a specialized version of ChatGPT's model. The model employs a method called deliberative alignment to respond more cautiously, resist adversarial prompts, and stick to safety guidelines.
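OpenAI hasn't published how this routing works under the hood, but a minimal sketch can illustrate the general idea. Everything in the toy Python below is a hypothetical assumption: the keyword check is a crude stand-in for a real distress classifier, and the model names are invented.

```python
# Purely illustrative: OpenAI has not published its routing implementation.
# The phrase list, model names, and logic below are all hypothetical.

DISTRESS_PHRASES = {"hopeless", "can't go on", "hurt myself", "suicide"}

DEFAULT_MODEL = "standard-chat-model"   # hypothetical model name
SAFETY_MODEL = "cautious-safety-model"  # hypothetical model name

def looks_sensitive(message: str) -> bool:
    """Crude stand-in for a real distress classifier."""
    text = message.lower()
    return any(phrase in text for phrase in DISTRESS_PHRASES)

def route_message(message: str) -> str:
    """Choose which model should handle the message."""
    if looks_sensitive(message):
        # Sensitive conversations go to the more cautious model; for a
        # linked teen account, this is also where a parent alert could fire.
        return SAFETY_MODEL
    return DEFAULT_MODEL

if __name__ == "__main__":
    print(route_message("What's a good pasta recipe?"))  # standard-chat-model
    print(route_message("I feel hopeless lately."))      # cautious-safety-model
```

In a real system, the classifier would be a trained model rather than keyword matching, and the cautious model would be the one trained with deliberative alignment to resist adversarial prompts and follow safety guidelines.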
To make the new safety system function, OpenAI has created an Expert Council on Well-Being and AI and a Global Physician Network that includes more than 250 medical professionals specializing in mental health, substance use, and adolescent care. These advisors will help shape how distress is detected, how the AI responds, and how escalations should work in moments of real-world risk.
Parents have long worried about screen time and online content, but AI introduces a new layer: not just what your child sees, but who they talk to. When that "who" is an emotionally sophisticated large language model that sounds like it cares despite being just an algorithm, things get even more complicated.
AI safety has mostly been reactive until now, but the new tools push AI toward proactively preventing harm. Hopefully, that means intervention won't usually have to take the form of a dramatic text to a parent and a plea from the AI for a teen to consider their loved ones. It might be awkward or resented, but if the new features can steer a conversational cry for help away from the cliff's edge, that's not a bad thing.