Google is 'reimagining' Android to be all-in on AI – and it looks truly impressive

A photo of the Gemini AI in Android presentation
(Image credit: Future)

Google is looking to "reimagine" Android with its Gemini AI, touting this as a "once-in-a-generation event to reimagine what phones can do".

At Google I/O 2024, the search giant said it would bake AI into Android in three ways: putting AI search into Android, making Gemini the new AI assistant, and harnessing on-device AI. 

Translated into everyday speech, this means more AI search tools such as Circle to Search being front and center in Android. The AI-powered tool, which can identify physically circled objects and text in photos and onscreen, will be boosted to tackle more complex problems like graphs and formulae later this year. 

The Gemini AI, which can be found in the Google Pixel 8a right now, will become the AI foundation for Android, bringing multimodal AI – the tech to process, analyze and learn from information and inputs from multiple sources and sensors – to the mobile operating system. All of which makes this one of the bigger AI announcements from Google I/O 2024.

In practice, this'll mean Gemini will work in all manner of apps and tools to provide context-aware suggestions, answers and prompts. One example is using the AI in the Android Messages app to produce AI-made images to share in chats. Another is the ability to answer questions about a YouTube video a person is watching, or to pull data from sources like PDFs to answer very specific queries, such as a particular rule in a sport.

What's more, Gemini can learn from all this and use that information to predict what a person might want. For instance, having learned from chats that the user is interested in tennis, it could serve up (pun intended) options to find nearby tennis courts.

The third aspect of AI-ing Android is to ensure a lot of the smart processing can happen on the phone, rather than needing an internet connection. So, Gemini Nano provides a low-latency foundational model for onboard AI processing, with multimodal capabilities; this effectively lets the AI understand more about the context of what's being asked of it and what's going on. 

An example of this in action was how Gemini can detect a fraudulent call looking to scam a person out of their bank details, and alert them before fraud can take place. And as this processing takes place on the phone, there's no concern about a remote AI listening in on private conversations.

Equally, the AI tech can use its contextual understanding to help provide accurate descriptions of what a person with visual impairments is looking at, be it in real life or online. 

In short, Google intends to make an AI-centric Android more helpful and more powerful when it comes to finding things and getting things done. And with Gemini Nano coming with multimodal capabilities to Pixel devices later this year, we can surely expect to see the Google Pixel 9 series be the first phones out of the gate with the reimagined Android. 

Managing Editor, Mobile Computing

Roland Moore-Colyer is Managing Editor at TechRadar with a focus on phones and tablets, but a general interest in all things tech, especially those with a good story behind them. He can also be found writing about games, computers, and cars when the occasion arrives, and supports with the day-to-day running of TechRadar. When not at his desk Roland can be found wandering around London, often with a look of curiosity on his face and a nose for food markets.