ChatGPT might not be using the model you think — and it’s also hiding others in settings
There's a whole rotating cast of models working in the background
- ChatGPT now uses a simplified model picker that can automatically switch between different AI models
- Users may not realize that different prompts can trigger different underlying models
- Additional models and controls are hidden in settings that most never access
Don't worry, you haven't gone crazy: ChatGPT's model picker (at the top of the screen) is looking a little cleaner this week, with fewer model names cluttering up the interface.
Forget model names like 5.4, 4o, and o3; those are a thing of the past. ChatGPT Plus subscribers will now see just three choices, labeled Instant, Thinking, and Pro. However, this apparent win for simplicity over complexity isn't quite what it seems.
The update turns ChatGPT's options from a choice of specific models into more of a broad style request. The actual model used for an answer will more often be decided by ChatGPT itself, based on your prompt's complexity and other settings.
Those factors will affect whether ChatGPT's answers come from a faster, more lightweight model or a more powerful, power-hungry LLM. You might not even be told which one handled your request.
It puts control of ChatGPT a step further from the user. Selecting a ChatGPT model used to mean just that. Now, any of the three modes might map to any model in ChatGPT's stable, depending on other factors.
You might get answers almost instantly in short, conversational form. Or there might be a pause and a longer, more structured answer. That difference is not just tone. It reflects how much computational effort the system has decided to spend.
Model switching
The change isn't random; it helps OpenAI solve a real problem. Though powerful, the most advanced AI models are also slower and more expensive to run. Using them for every single request would make your experience of using ChatGPT sluggish and costly.
OpenAI can deliver something that feels both fast and capable by mixing up the models.
On the other hand, it makes for less predictable results. Two people using ChatGPT at the same time may not be using the same underlying system, even if their screens look identical.
One person might be routed to a lighter model, another to a heavier one, based on subtle differences in their prompts or usage patterns. The result is an experience that can vary in ways that are hard to explain from the outside.
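To make the idea concrete, here is a minimal sketch of how a prompt router like this could work. OpenAI has not published its routing logic, so every name, keyword, and threshold below is hypothetical; this only illustrates the general pattern of scoring a prompt and picking a cheaper or more capable model tier.

```python
# Illustrative sketch only: OpenAI's actual routing logic is not public.
# All model names, keywords, and thresholds here are hypothetical.

def estimate_complexity(prompt: str) -> int:
    """Crude stand-in for a real complexity estimate."""
    score = len(prompt.split())  # longer prompts score higher
    for keyword in ("prove", "analyze", "step by step", "compare"):
        if keyword in prompt.lower():
            score += 50  # reasoning-style requests score much higher
    return score

def route(prompt: str, near_usage_limit: bool = False) -> str:
    """Pick a hypothetical model tier for a given prompt."""
    if near_usage_limit:
        return "lightweight-model"  # quietly downgrade when limits loom
    if estimate_complexity(prompt) > 60:
        return "reasoning-model"    # slower, more expensive tier
    return "lightweight-model"      # fast, conversational tier

print(route("What's the capital of France?"))
# prints "lightweight-model"
print(route("Analyze the long-term economic effects of tariffs step by step."))
# prints "reasoning-model"
```

Two users typing at the same moment can land on different tiers purely because one prompt trips the complexity check and the other doesn't, which is exactly why identical-looking screens can hide different underlying models.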
You might not even know about all of the models available within ChatGPT anymore. They've been hidden in the configure menu, tucked away in settings that most people never open.
That's where the legacy models can still be accessed and where automatic switching can be turned off. You can even adjust how much effort the system applies when reasoning.
Take back control
For casual users, this probably does not matter much. The experience feels smoother. You type something, you get an answer, and it generally works. The system takes care of the details.
For more attentive users, the change might feel slightly disorienting. If ChatGPT suddenly seems less detailed or more hesitant, it might be that it switched to a different model without telling you. In some cases, usage limits can trigger this kind of shift, quietly downgrading the level of reasoning applied to your prompts.
That creates a small but noticeable gap between expectation and reality. Many people assume they are interacting with a single, consistent intelligence. In truth, they are interacting with a flexible system that constantly adjusts itself.
This approach works because it removes friction. Most people do not want to manage settings or learn the nuances of different models. They want results. But when you no longer choose the model directly, you also give up some influence over how your answer is generated. That's not relevant for everyday questions, but it can matter when precision or depth becomes important.
If you want to be sure which model you're using, click the model picker menu, then open the Configure option. Inside, you'll find a Model selector.
The next phase of AI may not be about choosing the smartest model. It may be about understanding when and why the system is choosing for you. And if you want to take back control, the answer is sitting in a settings menu that most people never open.


Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of the world and technology. For the last five years, he served as head writer for Voicebot.ai and was on the leading edge of reporting on generative AI and large language models. He's since become an expert on the products of generative AI models, such as OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and every other synthetic media tool. His experience runs the gamut of media, including print, digital, broadcast, and live events. Now, he's continuing to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York City.