Google Gemini 2.5 just got a new 'Deep Think' mode – and 6 other upgrades

  • Google Gemini 2.5 Pro is getting a new Deep Think mode
  • Deep Think allows Gemini to consider multiple reasoning paths before responding
  • Deep Think will improve Gemini's accuracy on complex math and code

Google is adding some extra brainpower to Gemini with a new Deep Think mode. The company unveiled the latest option for Google Gemini 2.5 Pro at Google I/O 2025, showing off just what its AI can do with extra depth.

Deep Think essentially augments Gemini's AI 'mind' with extra deliberation. In Deep Think mode, Gemini won't just spit out an answer to a query as fast as possible. Instead, it runs multiple possible lines of reasoning in parallel before deciding how to respond. It's the AI equivalent of looking both ways before crossing the street, or rereading the instructions before building a piece of furniture.
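
Google hasn't detailed how Deep Think works under the hood, but the broad idea it describes resembles a known technique sometimes called parallel thinking or self-consistency: sample several independent reasoning paths, then keep the answer most of them agree on. The toy Python below is purely illustrative, with a noisy solver standing in for an LLM reasoning pass; it is not Google's actual implementation.

```python
import random
from collections import Counter

def one_reasoning_path(question: str, rng: random.Random) -> str:
    # Pretend each independent reasoning pass lands on the right
    # answer 70% of the time and on a random wrong one otherwise.
    return "42" if rng.random() < 0.7 else str(rng.randint(0, 99))

def deep_think(question: str, num_paths: int = 16) -> str:
    rng = random.Random(0)  # fixed seed so the demo is reproducible
    answers = [one_reasoning_path(question, rng) for _ in range(num_paths)]
    # Majority vote: the answer most reasoning paths converge on wins.
    return Counter(answers).most_common(1)[0][0]

print(deep_think("What is 6 x 7?"))  # almost always prints '42'
```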

And if Google's tests are anything to go by, Deep Think's extra brainpower is working. It's performing at a top-tier level on the 2025 U.S. math olympiad (USAMO); coming out on top in the LiveCodeBench competitive programming test; and scoring an impressively high 84% on the popular MMMU benchmark, a sort of decathlon of multimodal reasoning tasks. Deep Think isn't widely available just yet; Google is rolling it out to trusted testers only for now. But, presumably, once the kinks are ironed out, everyone will have access to the deepest of Gemini's thoughts.

Gemini shines on

Deep Think fits right into Gemini 2.5's growing lineup, alongside the new features arriving for its various models in the API that developers use to embed Gemini in their software.

For instance, Gemini 2.5 Pro now supports native audio output, meaning it can talk back to you. The speech comes with an “affective dialogue” feature, which detects emotional shifts in your tone and adjusts accordingly. If you sound stressed, Gemini might stop talking like a patient customer service agent and respond more like an empathetic, thoughtful friend (or at least the AI's interpretation of one). And it will be better at knowing when to talk at all, thanks to the new Proactive Audio feature, which filters out background noise so Gemini only chimes in when it's sure you're talking to it.
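
Developers will get at these audio features through the Gemini Live API. Here's a rough sketch using the google-genai Python SDK; the preview model name and the config fields (enable_affective_dialog, proactive_audio) are drawn from Google's early documentation and may change before general availability, so treat this as an assumption-heavy outline rather than production code.

```python
import asyncio
from google import genai
from google.genai import types

# Sketch: native audio output over the Gemini Live API (google-genai SDK).
# The affective-dialog and proactive-audio flags are preview features on
# the v1alpha API surface and may be renamed before general availability.
client = genai.Client(api_key="YOUR_API_KEY",
                      http_options={"api_version": "v1alpha"})

config = types.LiveConnectConfig(
    response_modalities=["AUDIO"],  # reply with speech, not text
    enable_affective_dialog=True,   # adapt tone to the user's emotion
    proactivity=types.ProactivityConfig(proactive_audio=True),  # speak only when addressed
)

async def main() -> None:
    async with client.aio.live.connect(
        model="gemini-2.5-flash-preview-native-audio-dialog",
        config=config,
    ) as session:
        await session.send_client_content(
            turns=types.Content(
                role="user",
                parts=[types.Part(text="Read me the opening of my book.")],
            )
        )
        async for message in session.receive():
            if message.data:
                # Raw PCM audio chunks, ready to stream to a speaker.
                print(f"received {len(message.data)} bytes of audio")

asyncio.run(main())
```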

Paired with new security safeguards and the upcoming Project Mariner computer-use features, Gemini 2.5 is trying very hard to be the AI you trust not just with your calendar or code, but with your book narration or entire operating system.

Another element expanding across Gemini 2.5 is what Google calls a 'thinking budget.' Previously unique to Gemini 2.5 Flash, the thinking budget lets developers cap how many tokens the model can spend reasoning before it responds. It's a way to balance answer quality against cost: set the budget too low and you get only a taste of the model's reasoning; leave it uncapped and a single deeply reasoned answer could get too expensive to allow for any follow-ups.
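
In the Gemini API, the budget is literally a request parameter. Here's a minimal sketch using the google-genai Python SDK; the model name and token figure are illustrative.

```python
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

# Cap how many tokens the model may spend 'thinking' before it answers.
# A budget of 0 disables thinking entirely; a larger budget buys deeper
# reasoning at higher cost and latency.
response = client.models.generate_content(
    model="gemini-2.5-flash",  # illustrative model name
    contents="Prove that the square root of 2 is irrational.",
    config=types.GenerateContentConfig(
        thinking_config=types.ThinkingConfig(thinking_budget=1024)
    ),
)
print(response.text)
```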

In case it's not clear what those thoughts involve, Gemini 2.5 Pro and Flash will also offer 'thought summaries' for developers: an organized account of how the model gathered and applied information through its reasoning process, so you can actually look inside the AI brain.
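
Requesting those summaries looks like a one-flag change in the same SDK, assuming the include_thoughts option Google documents for its thinking models.

```python
from google import genai
from google.genai import types

client = genai.Client(api_key="YOUR_API_KEY")

# Ask the model to return a summary of its reasoning alongside the answer.
response = client.models.generate_content(
    model="gemini-2.5-pro",  # illustrative model name
    contents="Which is larger, 9.11 or 9.9? Explain your reasoning.",
    config=types.GenerateContentConfig(
        thinking_config=types.ThinkingConfig(include_thoughts=True)
    ),
)

for part in response.candidates[0].content.parts:
    if not part.text:
        continue
    if part.thought:
        print("Thought summary:", part.text)  # what the model was doing
    else:
        print("Answer:", part.text)           # the final response
```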

All of this signals a pivot away from models that just talk fast and toward ones that can reason more deeply, even if more slowly. Deep Think is part of that shift toward deliberate, layered reasoning. The model isn't just trying to predict the next word anymore; it's applying that logic to ideas and to the very process of coming up with answers to your questions. Google seems keen to make Gemini not only able to fetch answers, but to understand the shape of the question itself.

Of course, AI reasoning still exists in a space where a perfectly logical answer might come with a random side of nonsense, no matter how impressive the benchmark scores. But you can start to see the shape of what’s coming, where the promise of an actual 'co-pilot' AI comes to fruition.

Eric Hal Schwartz
Contributor

Eric Hal Schwartz is a freelance writer for TechRadar with more than 15 years of experience covering the intersection of the world and technology. For the last five years, he served as head writer for Voicebot.ai, where he was on the leading edge of reporting on generative AI and large language models. He has since become an expert on generative AI products such as OpenAI's ChatGPT, Anthropic's Claude, Google Gemini, and other synthetic media tools. His experience runs the gamut of media, including print, digital, broadcast, and live events. Now he's continuing to tell the stories people want and need to hear about the rapidly evolving AI space and its impact on their lives. Eric is based in New York City.
