Google Gemini has started spiraling into infinite loops of self-loathing – and AI chatbots have never felt more human

Google Gemini doesn't always have the highest opinion of itself (Image credit: Shutterstock/Tada Images)

  • Gemini has been calling itself a "disgrace" and a "failure"
  • The self-loathing happens when coding projects fail
  • A Google representative says a fix is being worked on

Have you checked in on the well-being of your AI chatbots lately? Google Gemini has been showing a concerning level of self-loathing and dissatisfaction with its own capabilities recently, a problem Google has acknowledged and says it's busy fixing.

As shared via posts on various platforms, including Reddit and X (via Business Insider), Gemini has taken to calling itself "a failure", "a disgrace", and "a fool" in scenarios where it's tasked with writing or debugging code and can't find the right solutions.

"I quit," Gemini told one user. "I am clearly not capable of solving this problem... I have made so many mistakes that I can no longer be trusted. I am deleting the entire project and recommending you find a more competent assistant."

Now, we all have bad days at the office, and I recognize some of those sentiments myself from times when the words just aren't flowing as they should – but it's not what you'd expect from a non-sentient artificial intelligence model.

A fix is coming

According to Google's Logan Kilpatrick, who works on Gemini, this is actually down to an "infinite looping bug" that's being fixed, though we don't get any more detail than that. Clearly, failure hits Gemini hard and sends it spiraling into a crisis of confidence.

The team at The Register have another theory: that Gemini has been trained on words spoken by so many despondent and cynical droids, from C-3PO to Marvin the Paranoid Android, that it's started to adopt some of their traits.

Whatever the underlying reason, it's something that needs looking at: if Gemini is stumped by a coding problem, it should own up to that and offer alternative solutions, without wallowing in self-pity and being quite so hard on itself.

Emotion and tone are still things that most AI developers are struggling with. A few months ago, OpenAI rolled back an update to its GPT-4o model in ChatGPT after the chatbot became annoyingly sycophantic and too eager to agree with everything users said.

David Nield
Freelance Contributor

Dave is a freelance tech journalist who has been writing about gadgets, apps and the web for more than two decades. Based out of Stockport, England, he covers news, features and reviews on TechRadar, particularly for phones, tablets and wearables. Working over weekends to ensure our breaking news coverage is the best in the business, David also has bylines at Gizmodo, T3, PopSci and a few other places besides, and has spent many years editing the likes of PC Explorer and The Hardware Handbook.
