'People have used technology including AI in self-destructive ways,' says Sam Altman, as OpenAI tries to manage expectations following the GPT-5 backlash
OpenAI's CEO feels "uneasy"

It's been quite the week for OpenAI, and now the ChatGPT creator's CEO, Sam Altman, has taken to X to highlight just how concerned he is about what the GPT-5 launch backlash reveals about AI users.
Altman tweeted a long, almost blog-like post on X stating, "If you have been following the GPT-5 rollout, one thing you might be noticing is how much of an attachment some people have to specific AI models. It feels different and stronger than the kinds of attachment people have had to previous kinds of technology."
And he's absolutely right. After all, I saw it first-hand during the 10-hour-plus ChatGPT outage in June, when users flocked to TechRadar's liveblog, unable to cope with modern life without the help of their trusty AI sidekick.
The GPT-5 launch is the first time, however, that we've been able to see the impact of an AI model-related business decision on ChatGPT's loyal user base, which Altman said during last week's livestream sits at over 700 million people a week.
Following the launch of GPT-5, the response was not what OpenAI would have expected from a landmark release; instead, social media was flooded with users mourning the loss of GPT-4o, and with it, some were mourning the loss of a friend.
It's an attachment Altman says OpenAI has "been closely tracking for the past year or so," and now the CEO has shared his thoughts with the world as the AI pioneer tries to strike a balance between improving models and catering to those who use ChatGPT for more than just getting tasks done.
Into the unknown
In Altman's tweet, he says, "People have used technology, including AI, in self-destructive ways; if a user is in a mentally fragile state and prone to delusion, we do not want the AI to reinforce that."
And while he emphasizes that "Most users can keep a clear line between reality and fiction or role-play," he highlights "a small percentage cannot."
That small percentage he speaks of was the vocal majority last week on subreddits such as r/ChatGPT and r/OpenAI.
In one post I saw on Reddit, titled "4o saved my life. And now it's being shut down", a ChatGPT user discusses how the AI model filled a void left by their best friend passing away and a breakup.
ChatGPT's memory feature doesn't work across models, so in this case, the user's relationship with the AI disappeared overnight, including the personality the chatbot had developed through their conversations.
The outcry from disgruntled ChatGPT users, who not only use AI in ways like the scenario above but also for general daily tasks, was so loud that OpenAI responded by reinstating 4o.
The thing is, OpenAI had been advertising GPT-5 as a single model able to adapt to a user's prompt, removing the need for model selection. By reinstating 4o and some of the company's other models, OpenAI is catering to those who rely heavily on ChatGPT, but in turn has softened its stance on streamlining the user experience.
Finding the balance
Altman claims OpenAI doesn't want to "encourage delusion in a user that is having trouble telling the difference between reality and fiction." But his bigger concerns aren't found in the extreme cases but instead in the more subtle ones.
Altman says, "There are going to be a lot of edge cases, and generally we plan to follow the principle of 'treat adult users like adults', which in some cases will include pushing back on users to ensure they are getting what they really want."
He sees the positive impact of ChatGPT in people who use the software for therapy or life coaching, but Altman recognizes that the reliance on ChatGPT and specifically the latest model at any given time could cause addiction and impact users' "longer-term well-being."
While Altman's tweet offers insight into how OpenAI and its CEO view the concerning trends appearing in this newfound AI-dominated world, it's hard not to read his thoughts and think he's preaching from the pulpit while benefiting from the chaos.
Altman says he can "imagine a future where a lot of people really trust ChatGPT’s advice for their most important decisions," but the thought of that makes him "uneasy." And while he claims "billions of people may be talking to AI in that way soon," he's aware that "we as in society, but also we as in OpenAI have to figure out how to make it a big net positive."
The ball is in your court, Sam.
John-Anthony Disotto is TechRadar's Senior Writer, AI, bringing you the latest news on, and comprehensive coverage of, tech's biggest buzzword. An expert on all things Apple, he was previously iMore's How To Editor, and has a monthly column in MacFormat. John-Anthony has used the Apple ecosystem for over a decade, and is an award-winning journalist with years of experience in editorial.