Bing’s ChatGPT brain is behaving so oddly that Microsoft may rein it in

(Image credit: Sdecoret via Shutterstock)

Microsoft launched its new Bing search engine last week and introduced an AI-powered chatbot to millions of people, creating long waiting lists of users looking to test it out, and a whole lot of existential dread among sceptics. 

The company probably expected some of the chatbot’s responses to be a little inaccurate the first time it met the public, and it had put measures in place to stop users who tried to push the chatbot into saying or doing strange, racist or harmful things. Those precautions haven’t stopped users from jailbreaking the chatbot anyway, getting it to use slurs or respond inappropriately.

Even with those measures in place, Microsoft wasn’t quite ready for the very strange, bordering on unsettling, experiences some users were having after trying to hold more informal, personal conversations with the chatbot. These included the chatbot making things up, throwing tantrums when called out on a mistake, or simply having a full-on existential crisis.

In light of these bizarre exchanges, Microsoft is considering new safeguards and tweaks to curtail the strange, sometimes too-human responses. That could mean letting users restart conversations or giving them more control over the chatbot’s tone.

Microsoft's chief technology officer told The New York Times the company was also considering limiting the length of conversations users can have with the chatbot before they stray into odd territory. Microsoft has already admitted that long conversations can confuse the chatbot, and that it can pick up on users' tone, which is where things might start going sour.

In a blog post, Microsoft admitted that its new technology was being used in ways it “didn’t fully envision”. The tech industry seems to be in a mad dash to get in on the artificial intelligence hype, and perhaps that excitement has clouded judgement and put speed ahead of caution.


Analysis: The bot is out of the bag now

Incorporating a technology as unpredictable and imperfect as AI into Bing, in an attempt to revitalise interest in its search engine, was definitely a risky move by Microsoft. It may have set out to create a helpful chatbot that won’t do more than it’s designed to do, such as pulling up recipes, helping with puzzling equations, or digging into certain topics, but it clearly didn’t anticipate how determined, and how successful, people can be when they set out to provoke a specific response from the chatbot.

New technology, particularly something as responsive as an AI chatbot, can make people feel the need to push it as far as it will go. We saw similar attempts when Siri was introduced, with users trying their hardest to make the virtual assistant angry, make it laugh, or even get it to date them. Microsoft may not have expected people to feed the chatbot such strange or inappropriate prompts, so it couldn’t have predicted how bad the responses would be.

Hopefully the new precautions will curb any further strangeness from the AI-powered chatbot and spare users the uncomfortable feeling of it seeming a little too human.

It’s always interesting to read about ChatGPT, particularly when the bot spirals after a few clever prompts, but with a technology this new and untested, nipping problems in the bud is the best thing to do.

There’s no telling whether the measures Microsoft plans to put in place will actually make a difference, but since the chatbot is already out there, there’s no taking it back. We just have to get used to patching up problems as they come, and hope anything potentially harmful or offensive is caught in time. AI's growing pains may only just have begun.

Muskaan Saxena
Computing Staff Writer

Muskaan is TechRadar’s UK-based Computing writer. She has always been a passionate writer and has had her creative work published in several literary journals and magazines. Her debut into the writing world was a poem published in The Times of Zambia, on the subject of sunflowers and the insignificance of human existence in comparison. Growing up in Zambia, Muskaan was fascinated with technology, especially computers, and she's joined TechRadar to write about the latest GPUs, laptops and recently anything AI related. If you've got questions, moral concerns or just an interest in anything ChatGPT or general AI, you're in the right place. Muskaan also somehow managed to install a game on her work MacBook's Touch Bar, without the IT department finding out (yet).
