Google Gemini Live is the first AI that almost encourages you to be rude
Interrupt your AI sidekick whenever you want - it can multitask
Google has made its Gemini AI assistant a little more human today by letting you interrupt or switch topics mid-conversation. The tech giant announced the release of the long-promised Gemini Live for mobile devices at its Made by Google 2024 event. Instead of the specific commands common to Google Assistant or Alexa, Gemini Live will respond to casual language and can even simulate speculation and brainstorming. The idea is to make conversations with the AI feel more natural.
Gemini Live is a bit like being on the phone with a really fast personal assistant. The AI can talk and complete tasks at the same time. The multitasking is currently available to Gemini Advanced subscribers on Android devices, but Google said it will expand to iOS soon. The personalization extends to what Gemini sounds like, too, with 10 new voice options in a range of styles. Google claims the upgraded speech engine behind them also delivers more emotionally expressive and realistic interactions.
Despite similarities, Gemini Live isn't just Google's version of OpenAI's ChatGPT Advanced Voice Mode. ChatGPT's Voice Mode can struggle to keep track of longer conversations. Gemini Live is built with a larger context window, so it's better at remembering what you said earlier in the chat.
Gemini Live forever
Google also unveiled a longer list of Gemini extensions, integrating the AI more deeply with Google’s suite of apps and services. Upcoming extensions will include integrations with Google Keep and Tasks, as well as expanded features on YouTube Music. The company described how you could ask Gemini Live to retrieve a recipe from Gmail and add the ingredients to a shopping list in Keep, or create a playlist of songs from a specific era using YouTube Music. This level of integration allows Gemini to interact more seamlessly with the apps and content on a user’s device, offering assistance that is tailored to the context of their activities.
Still, Gemini Live isn't quite where the demo at Google I/O 2024 suggested it would be. The visual processing features showcased there are still in the future. They will allow Gemini to see and respond to users’ surroundings via photos and video captured with the mobile device, which could significantly expand the utility of Gemini Live. The AI assistant's new features fit well with Google's efforts to integrate Gemini into every part of your life. Google's vision is a conversation with Gemini that never ends.
You might also like
- Gemini Live's background mode and app extensions could blow Apple Intelligence away
- What is Google Gemini? Everything you need to know about Google’s next-gen AI
- Gemini's next evolution could let you use the AI while you browse the internet