OpenAI responds to furious ChatGPT subscribers who accuse it of secretly switching to inferior models

OpenAI and CEO Sam Altman have more issues to face (Image credit: Shutterstock / Rokas Tenys)

  • New ChatGPT safety rules are frustrating users
  • Sensitive topics get rerouted to different AI models
  • OpenAI has posted a response to user complaints

OpenAI is embroiled in another controversy over AI model switching in ChatGPT, with many paying users furious that they're being rerouted away from their preferred model whenever conversation topics get emotionally or legally sensitive.

There are plenty of threads on Reddit about the issue: in short, ChatGPT introduced new safety guardrails this month, which reroute the conversation to a separate, more conservative AI model whenever the chatbot detects that it needs to be extra cautious in its responses.

That has clearly frustrated a lot of users, who want to stick with GPT-4o, GPT-5, or whichever model they happen to be using – especially those who are paying for access. There's currently no way to disable this behavior, and it's not always clear when the switches happen.

"Adults deserve to choose the model that fits their workflow, context, and risk tolerance," writes one user. "Instead we’re getting silent overrides, secret safety routers and a model picker that's now basically UI theater."

Safety routing

Enough of a fuss has been kicked up that OpenAI executive Nick Turley has weighed in on social media. Turley explains that the new safety routing system is for "sensitive and emotional topics", and works on a per-message and temporary basis.

It's part of a broader effort to improve how ChatGPT responds to signs of mental and emotional distress, as OpenAI has previously explained in a blog post. From the user perspective though, the new rules are taking some getting used to.
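OpenAI hasn't published the implementation details of the router, but Turley's "per-message and temporary" description suggests the decision is made fresh for each prompt rather than for the whole conversation. As a purely illustrative sketch (the classifier, model names and keywords below are hypothetical stand-ins, not OpenAI's actual system), the behavior users are describing would look something like this:

```python
# Illustrative sketch only – not OpenAI's code. A hypothetical classifier and
# model IDs show what "per-message, temporary" routing means: each reply is
# routed independently, and the user's chosen model is never permanently swapped.

PREFERRED_MODEL = "gpt-5"             # the model the user picked in the UI
SAFETY_MODEL = "safety-tuned-model"   # hypothetical, more conservative variant

def looks_sensitive(message: str) -> bool:
    """Stand-in for a real distress/sensitivity classifier."""
    keywords = ("self-harm", "overdose", "lawsuit")  # toy heuristic only
    return any(word in message.lower() for word in keywords)

def pick_model_for(message: str) -> str:
    # The decision is per message: a flagged prompt gets the safety model for
    # this one reply, and the next message goes back to the preferred model.
    return SAFETY_MODEL if looks_sensitive(message) else PREFERRED_MODEL

for prompt in ["I'm worried about a lawsuit", "What's the weather like?"]:
    print(prompt, "->", pick_model_for(prompt))
```

Under that reading, the switch is invisible unless you ask the chatbot which model answered, which is exactly the lack of transparency users are complaining about.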

OpenAI clearly has a responsibility to look after vulnerable users who may need extra support from an AI chatbot that isn't quite so expansive and freeform – young people accessing ChatGPT in particular.

For a lot of users venting their anger online though, it's like being forced to watch TV with the parental controls locked in place, even if there are no kids around. This is likely to be an issue we hear more about in the coming days.

