The 13 biggest announcements from Google I/O 2025

Google I/O 2025
(Image credit: Google)

Want proof that Google really has gone all-in on AI? Then look no further than today's Google I/O 2025 keynote.

Forget Android, Pixel devices, Google Photos, Maps and all the other Google staples – none were anywhere to be seen. Instead, the full two-hour keynote spent its entire time taking us through Gemini, Veo, Flow, Beam, Astra, Imagen and a bunch of other tools to help you navigate the new AI landscape.

There was a lot to take in, but don't worry – we're here to give you the essential round-up of everything that got announced at Google's big party. Read on for the highlights.

1. Google Search got its biggest AI upgrade yet 

A phone on a green and blue background showing AI Mode next to a laptop showing Deep Search

(Image credit: Google)

‘Googling’ is no longer the default in the ChatGPT era, so Google has responded. It’s launched its AI Mode for Search (previously just an experiment) to everyone in the US, and that’s just the start of its plans.

Within that new AI Mode tab, Google has built several new Labs tools that it hopes will stop us from jumping ship to ChatGPT and others.

A ‘Deep Search’ mode lets you set it working on longer research projects, while a new ticket-buying assistant (powered by Project Mariner) will help you score entry to your favourite events.

Unfortunately, the less popular AI Overviews feature is also getting a wider rollout. One thing’s for sure: Google Search is going to look and feel very different from now on.


2. Google just made shopping more fun


Shopping online can go from easy to chaotic in moments, given the huge number of brands, retailers and sellers out there – but Google is aiming to use AI to streamline the process.

That's because the aforementioned AI Mode for Search now responds to shopping-based prompts, such as ‘I’m looking for a cute purse’, serving up products and images for inspiration and letting users narrow down large ranges of products. For now, though, it’s only rolling out in the US.

The key new feature in the AI-powered shopping experience is a try-on mode: upload a single image of yourself, and Google’s Shopping Graph, combined with its Gemini AI models, will let you virtually try on clothes.

The only caveat is that the try-on feature is still experimental, so you’ll need to opt in to the ‘Search Labs’ program to give it a go.

Once you have a product or outfit in mind, Google’s agentic checkout feature will buy it on your behalf, using the payment and delivery details stored in Google Pay – that is, if the price meets your approval. You can set the AI to track the cost of a particular product and only buy it when the price is right. Neat.


3. Beam could reinvent video calls

Google Beam

(Image credit: Google)

Video calls are the bane of many people's lives, particularly if you work in an office and spend 60% of your time in such calls. But Google's new Beam could make them a lot more interesting.

The idea here is to present calls in 3D, as if you're in the same room as someone when you're on a call with them; a bit like with VR. However, there's no need for a VR headset or glasses here, with Beam instead using cameras, mics, and – of course – AI to work its magic.

If that all sounds rather familiar, it's because Google has teased this before, under the name Project Starline. But this is no longer a faraway concept: it's here, and almost ready for people to use.

The caveat is that both callers will need to sit in a custom-made booth that can generate the 3D renders that are needed. But it's all pretty impressive nonetheless, and the first business customers will be able to get the kit from HP later in 2025.


4. Veo 3 just changed the game for AI video

AI video generation tools are already incredibly impressive, given they didn't even exist a year or two ago, but Google's new Veo 3 model looks set to take things to the next level.

As with the likes of Sora and Pika, the tool's third-generation version can create video clips and then tie them together to make longer films. But unlike those other tools, it can also generate audio at the same time – and expertly sync sound and vision together.

Nor is this capability limited to sound effects and background noises, because it can even handle dialogue – as demonstrated in the clip above, which Google demoed in its I/O 2025 keynote.

"We’re emerging from the silent era of video generation," said Google DeepMind CEO Demis Hassabis – and we're not going to argue with that.


5. Gemini Live is here – and it's free

Google Gemini Live on S25 Edge

(Image credit: Future)

Google Gemini Live, the search giant’s AI-powered voice assistant, is now available for free on both Android and iOS. Previously a paid-for option, this move opens up the AI to a wealth of users.

With Gemini Live, you can talk to the generative AI assistant using natural language, and use your phone’s camera to show it things, from which it’ll extract information and serve up related data. Plus, the ability to share your phone’s screen and camera feed via Gemini Live, previously limited to Android, has now been extended to compatible iPhones.

Google will start rolling out Gemini Live for free from today, with iOS users being able to access the AI and its screen sharing features in the coming weeks.


6. Flow is an awesome new AI filmmaking tool

Cinematic images created by Google Flow

(Image credit: Google)

Here's one for all the budding movie directors out there: at I/O 2025, Google took the covers off Flow, an AI-powered tool for filmmakers that can create scenes, characters and other movie assets from a natural language text prompt.

Let’s say you want to see doctors perform an operation in the back of a 1970s taxi; well, pop that into Flow and it’ll generate the scene for you, using the Veo 3 model, with surprising realism.

Effectively an extension of the experimental Google Labs VideoFX tool launched last year, Flow will be available to subscribers on the Google AI Pro and Google AI Ultra plans in the US, with more countries to come.

And it could be a tool that’ll let budding directors and cinematic video makers more effectively test scenes and storytelling, without needing to shoot a lot of clips.

Whether this will enhance filmmaking planning or yield a whole new era of cinema, where most scenes are created using generative AI rather than making use of sets and traditional CGI, has yet to be seen. But it looks like Flow could open up movie making to more than just keen amateurs and Hollywood directors.


7. Gemini's artistic abilities are now even more impressive

Woman generated by Imagen 4

(Image credit: Google)

Gemini is already a pretty good choice for AI image generation; depending on who you ask, it's either slightly better or slightly worse than ChatGPT, but essentially in the same ballpark.

Well, now it might have moved ahead of its rival, thanks to a big upgrade to its Imagen model.

For starters, Imagen 4 brings with it a resolution boost, to 2K – meaning you'll be better able to zoom into and crop its images, or even print them out.

What's more, it'll also have "remarkable clarity in fine details like intricate fabrics, water droplets and animal fur, and excels in both photorealistic and abstract styles”, Google says – and judging by the image above, that looks pretty spot on.

Finally, Imagen 4 will give Gemini improved abilities at spelling and typography, which has bizarrely remained one of the hardest puzzles for AI image generators to solve so far. It's available from today, so expect even more AI-generated memes in the very near future.


8. Gemini 2.5 Pro just got a groundbreaking new ‘Deep Think’ upgrade

Gemini on a mobile phone.

(Image credit: Shutterstock/JLStock)

Enhanced image capabilities aren't the only upgrades coming to Gemini, either – it's also got a dose of extra brainpower with the addition of a new Deep Think Mode.

This basically augments Gemini 2.5 Pro with a function that makes it think harder about the queries posed to it, rather than trying to kick out an answer as quickly as possible.

This means the latest pro version of Gemini will run multiple possible lines of reasoning in parallel, before deciding on how to respond to a query. You could think of it as the AI looking deeper into an encyclopaedia, rather than winging it when coming up with information.

There is a catch here, in that Google is only rolling out Deep Think Mode to trusted testers for now – but we wouldn't be surprised if it got a much wider release soon.


9. Gemini AI Ultra is Google’s new ‘VIP’ plan for AI obsessives

Gemini on a mobile phone

(Image credit: Shutterstock/Sadi Santos)

Would you spend $3,000 a year on a Gemini subscription? Google thinks some people will, because it's rolled out a new Gemini AI Ultra plan in the US that costs a whopping $250 a month.

The plan isn't aimed at casual AI users, obviously; Google says it offers "the highest usage limits and access to our most capable models and premium features" and that it'll be a must if "you're a filmmaker, developer, creative professional or simply demand the absolute best of Google AI with the highest level of access."

On the plus side, there's a 50% discount for the first three months, while the previously available Premium plan sticks around for $19.99 a month, now renamed AI Pro. If you like the sound of AI Ultra, it will be available in more countries soon.


10. Google just showed us the future of smart glasses

A variety of Android XR headsets and glasses users in a collage

(Image credit: Google)

Google finally gave us the Android XR showcase it has been teasing for years.

At its core is Google Gemini: on the glasses, Gemini can find and direct you towards cafes based on your food preferences, perform live translation, and answer questions about things you can see. On a headset, it can use Google Maps to transport you all over the world.

Android XR is coming to devices from Samsung, Xreal, Warby Parker, and Gentle Monster, though there’s no word yet on when they’ll be in our hands.


11. Project Astra also got an upgrade

Google IO 2025

(Image credit: Future)

Project Astra is Google’s powerful mobile AI assistant that can react and respond to the user’s visual surroundings, and this year’s Google I/O has given it some serious upgrades.

We watched as Astra gave a user real-time advice to help him fix his bike, speaking in natural language. We also saw Astra argue against incorrect information as a user walked down the street mislabeling the things around her.

Project Astra is coming to both Android and iOS today, and its visual recognition function is also making its way to AI Mode in Google Search.


12. …As did Chrome

Google IO 2025

(Image credit: Future)

Is there anything that hasn’t been given an injection of Gemini’s AI smarts? Google’s Chrome browser was one of the few tools that hadn’t, it seems – but that’s now changed.

Gemini is now rolling out in Chrome for desktop from tomorrow to Google AI Pro and AI Ultra subscribers in the US.

What does that mean? You’ll apparently now be able to ask Gemini to clarify any complex information that you’re researching, or get it to summarize web pages. If that doesn’t sound too exciting, Google also promised that Gemini will eventually work across multiple tabs and also navigate websites “on your behalf”.

That gives us slight HAL vibes (“I’m sorry, Dave, I’m afraid I can’t do that”), but for now it seems Chrome will remain dumb enough for us to be considered worthy of operating it.

13. …And so did Gemini Canvas

As part of Gemini 2.5, Canvas – the so-called ‘creative space’ inside the Gemini app – has got a boost via the upgraded AI models in this new version of Gemini.

This means Canvas is more capable and intuitive, with the tool able to take data and prompts and turn them into infographics, games, quizzes, web pages and more within minutes.

But the real kicker here is that Canvas can now take complex ideas and turn them into working code at speed and without the user needing to know specific coding languages; all they need to do is describe what they want in the text prompt.

Such capabilities open up the world of ‘vibe coding’, where you can create software without knowing any programming languages, and make it possible to prototype new app ideas at speed, using prompts alone.



Marc McLaren
Global Editor in Chief

Marc is TechRadar’s Global Editor in Chief, the latest in a long line of senior editorial roles he’s held in a career that started the week that Google launched (nice of them to mark the occasion). Prior to joining TR, he was UK Editor in Chief on Tom’s Guide, where he oversaw all gaming, streaming, audio, TV, entertainment, how-to and cameras coverage. He's also a former editor of the tech website Stuff and spent five years at the music magazine NME, where his duties mainly involved spoiling other people’s fun. He’s based in London, and has tested and written about phones, tablets, wearables, streaming boxes, smart home devices, Bluetooth speakers, headphones, games, TVs, cameras and pretty much every other type of gadget you can think of. An avid photographer, Marc likes nothing better than taking pictures of very small things (bugs, his daughters) or very big things (distant galaxies). He also enjoys live music, gaming, cycling, and beating Wordle (he authors the daily Wordle today page).

