Google’s Gemini will be right back after these hallucinations: image generator to make a return after historical blunders
Google vows to bring back Gemini after it generates controversial images
Google is gearing up to relaunch the image creation tool built into its newly rebranded generative artificial intelligence (AI) bot, Gemini, in the next few weeks. The tool is in theory capable of generating almost anything you can dream up and put into words as a prompt, but “almost” is the key word here.
Google pumped the brakes on Gemini’s image generation after the tool was observed creating historically inaccurate depictions and other images considered questionable or offensive. However, it looks like the feature could return soon, as Google DeepMind CEO Demis Hassabis announced that it will be rebooted in the coming weeks, once the company has had time to address these issues.
Image generation came to Gemini earlier in February, and users were keen to test its abilities. Some attempted to generate images depicting particular historical periods and found the results deviated greatly from accepted historical fact. A number of these users took to social media to share their results and direct criticism at Google.
The images caught many people’s attention and sparked plenty of conversation, and Google acknowledged them as a symptom of a deeper problem within Gemini. The tech giant then chose to take the feature offline and fix whatever was causing the model to dream up such strange and controversial pictures.
Speaking at a panel at the Mobile World Congress (MWC) event in Barcelona, Hassabis confirmed that Gemini was not working as intended and that it would take a few weeks to amend the feature and bring it back online.
If at first your generative AI bot doesn't succeed...
Google’s first attempt at a generative AI chatbot was Bard, which received a lukewarm reception and didn’t win users over from the more popular ChatGPT in the way Google had hoped. The company then changed course and debuted its revamped and rebranded family of generative models, Gemini. Like OpenAI with ChatGPT, Google now offers a premium tier for Gemini that unlocks advanced features for a subscription.
Gemini's misadventures have also reignited discussions about AI ethics in general and Google’s AI ethics in particular, as well as issues like the accuracy of generated output and AI hallucinations. Companies like Microsoft and Google are pushing to win the AI assistant arms race, but in racing ahead they risk releasing products with flaws that could undermine their hard work.
AI-generated content is becoming increasingly popular, and these companies, especially given their size and resources, can (and really should) be held to a high standard of accuracy. High-profile failures like the one Gemini experienced aren’t just embarrassing for Google; they could damage the product’s perception in the eyes of consumers. There’s a reason Google rebranded Bard after its much-mocked debut.
There’s no doubt that AI is incredibly exciting, but Google and its peers should be mindful that rushing out half-baked products just to get ahead of the competition could spectacularly backfire.