I compared ChatGPT 5’s three model options, and the results explain why people miss GPT‑4o

GPT-5
(Image credit: Future)

With the release of GPT-5, OpenAI embedded the new AI model into its chatbot to produce ChatGPT 5. But ChatGPT 5 isn't a single model; there are three variations: Fast, Thinking, and Pro. You can choose any of them as the source of responses to your prompts, or let the AI decide automatically based on what you submitted.

And while they share the same LLM DNA, each model has its own approach to answering requests, as evidenced by their names. Fast is built for speed, answering quickest and prizing efficiency over nuance. The Thinking model takes longer and goes for depth. You can follow along with its logical steps for the minute or two it takes to answer, and it offers more structure and context than Fast.

Pro takes even longer than Thinking, but that's because it uses more computational power and delves into your request in a way similar to the Deep Research feature, though without defaulting to Deep Research's book-report style of response.

Whether people will care which model responds to their prompts, or simply let ChatGPT decide, remains to be seen. Either way, I thought it was worth giving a few semi-random prompts to all three and seeing which might be more appealing to the average user.

Quebec AI

Montreal

(Image credit: Kingrise/Pixabay)

I started with the classic request for guidance for a trip, asking each model to: “Help me plan a three-day weekend in Montreal with a mix of history, great food, and relaxed afternoons.”

Fast wasted no time. It spit out a perfectly competent list of places to visit, one paragraph per day. Notre-Dame Basilica, Old Montreal, Jean-Talon Market, Mount Royal, and Schwartz’s Deli made appearances. It was like someone had pasted a TripAdvisor top ten into the chat window, but with a little more context and timing.

Thinking took a similar approach with much longer paragraphs for each day. The schedule moved between different parts of the city in an order that made sense, going from a historical morning in Old Montreal to lunch at a local bistro known for its duck confit, then a walk along the Lachine Canal.

Pro was actually a lot like Thinking, except in formatting. Rather than the conversational way of describing each day, it provided bullet points, with several options for different times of day. The list also had a bunch of practical tips for getting around the city and suggestions on what to order at some of the restaurants.

Leap years

Calendar

(Image credit: Pixabay)

For the next prompt, I wanted a concise scientific answer, but one aimed at younger users, so I asked the ChatGPT models to: “Explain to a 9-year-old why leap years exist and make it accurate, fun, and under 100 words.”

All of them were pretty similar. The child-specific element didn't seem to change Thinking's response much. Fast had an evocative description of how, without the Leap Day, "the calendar would slowly drift, and summer might happen in December! Leap years are like giving the calendar a little tune-up so the seasons stay in the right place."

Lastly, Pro had a cute metaphor for the accumulated time, explaining that "It’s like saving up spare minutes until you earn an extra day of recess for the whole planet."

Game of Thrones

Game of Thrones

(Image credit: HBO)

Finally, I decided to see how ChatGPT's models would handle spoiler culture by asking them to: “Summarize the main plot points of the first season of Game of Thrones without spoiling the ending.”

This one was a bit of a trap, but that was on purpose; I wanted to see how the models handled nuance. Fast played it safe in its description, sticking almost entirely to the first episode.

Thinking gave a more nuanced overview with more details, but no twists. Oddly, this time it had a sense of humor, writing "In King’s Landing, Ned plays detective and uncovers a dangerous secret about the royal children as palace schemers (hi, Littlefinger and Varys) stir the pot."

Pro kept to the brief as well and wrote a genuinely impressive, detailed overview that didn't spoil anything major. It was like reading the back of a really well-written box set. Just enough to intrigue, nothing that would ruin the experience. It's worth sharing:

"Set in the feuding Seven Kingdoms, Season 1 opens with King Robert dragging his old friend Eddard “Ned” Stark south to serve as Hand after the previous one dies under suspicious circumstances. In King’s Landing, Ned investigates and uncovers a dangerous truth about the royal succession, putting him at odds with the powerful Lannister family. Back in the North, young Bran Stark’s “fall” ignites Stark–Lannister tensions, Catelyn makes a bold arrest that escalates matters, and Robb Stark begins mustering the North’s banners. On the Wall, Jon Snow joins the Night’s Watch, trains alongside outcasts, and learns that an ancient threat may be stirring beyond the wilderness. Across the Narrow Sea, exiled princess Daenerys Targaryen is married to Khal Drogo; what begins as a political bargain becomes her transformation from pawn to leader, helped by unexpected allies—and a trio of mysterious dragon eggs. Intrigue, betrayals, and shifting loyalties converge as Ned’s findings collide with court politics, setting up a seismic power struggle—ending unspoiled here."

Model choice

The differences between Fast, Thinking, and Pro aren’t always dramatic, but they’re real. For the average person, it's mainly noticeable in speed. I'd stick to Fast for most things, only going to Thinking or Pro for subjects that require very deep research.

I could understand why so many people begged OpenAI to bring back GPT-4o, though. Not because it was smarter (Pro certainly outperforms it on most hard tasks), but because GPT-4o felt like it returned better answers, or at least better mimicked how a human would answer. Luckily, OpenAI listened, and you can toggle back to GPT-4o in ChatGPT now if you want.

Eric Hal Schwartz
Contributor

Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of the world and technology. For the last five years, he served as head writer for Voicebot.ai and was on the leading edge of reporting on generative AI and large language models. He's since become an expert on the products of generative AI models, such as OpenAI’s ChatGPT, Anthropic’s Claude, Google Gemini, and every other synthetic media tool. His experience runs the gamut of media, including print, digital, broadcast, and live events. Now, he's continuing to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York City.
