ChatGPT is getting better at knowing when you need real human support – and I think it's about time


If you watched the recent launch of GPT-5 from OpenAI, you’d be forgiven for thinking it was purely a coding tool. While Sam Altman and his staff did interview one person who used ChatGPT to help her understand the medical jargon her doctors were using, the majority of the presentation was concerned with how good GPT-5 is at writing code.

Out in the real world, however, people use AI, and ChatGPT specifically, a bit differently. As the outcry over the removal of the older GPT-4o model after the launch of GPT-5 shows, a lot of people use ChatGPT for their mental health, and changing its personality affects them directly. For them, it acts as a mix of life coach, therapist, and friend.

OpenAI seems to be slowly waking up to this fact, and the responsibility it bears, and recently posted an announcement in which it says, “We sometimes encounter people in serious mental and emotional distress. We wrote about this a few weeks ago and had planned to share more after our next major update. However, recent heartbreaking cases of people using ChatGPT in the midst of acute crises weigh heavily on us, and we believe it’s important to share more now.”

Strengthening safeguards

So, while OpenAI is not announcing anything new just yet, it wants to “explain what ChatGPT is designed to do, where our systems can improve, and the future work we’re planning.”

In a nutshell, OpenAI is working to improve ChatGPT in a few key areas related to its users' health and safety. First, it is strengthening safeguards in long conversations: “ChatGPT may correctly point to a suicide hotline when someone first mentions intent, but after many messages over a long period of time, it might eventually offer an answer that goes against our safeguards.”

Second, it is refining how it blocks content: “We’ve seen some cases where content that should have been blocked wasn’t. These gaps usually happen because the classifier underestimates the severity of what it’s seeing. We’re tuning those thresholds so protections trigger when they should.”

OpenAI is also planning to expand interventions to more people in crisis. “We are exploring how to intervene earlier and connect people to certified therapists before they are in an acute crisis. That means going beyond crisis hotlines and considering how we might build a network of licensed professionals that people could reach directly through ChatGPT. This will take time and careful work to get right.”

Parental controls


Another interesting innovation is introducing parental controls. “We will also soon introduce parental controls that give parents options to gain more insight into, and shape, how their teens use ChatGPT. We’re also exploring making it possible for teens (with parental oversight) to designate a trusted emergency contact. That way, in moments of acute distress, ChatGPT can do more than point to resources: it can help connect teens directly to someone who can step in.”

ChatGPT has evolved so far and so quickly that it often feels to me like OpenAI hasn’t really had time to sit down and think about all the implications of its latest innovations before it announces them.

Parental controls should have been an option for all AI chatbots for a while now, but it’s good that they are finally being added. Other AIs, such as Copilot, seem to have more guardrails than ChatGPT around the types of discussions you can have, but they also farm out their parental controls to the Windows or Apple operating systems.

Whether OpenAI can implement effective parental controls that aren’t easy to circumvent remains to be seen (the difficulty of doing so is one of the reasons AIs typically fall back on recommending the operating system’s built-in parental controls instead), but I think it’s time for the conversation to start happening.

Graham Barlow
Senior Editor, AI

Graham is the Senior Editor for AI at TechRadar. With over 25 years of experience in both online and print journalism, Graham has worked for various market-leading tech brands including Computeractive, PC Pro, iMore, MacFormat, Mac|Life, Maximum PC, and more. He specializes in reporting on everything to do with AI and has appeared on BBC TV shows like BBC One Breakfast and on Radio 4 commenting on the latest trends in tech. Graham has an honors degree in Computer Science and spends his spare time podcasting and blogging.
