The new Bing AI ChatGPT bot is going to be limited to five replies per chat

Bing's AI search isn't working perfectly yet (Image credit: Microsoft)

As regular TechRadar readers will know, the heavily promoted AI chatbot enhancements recently added to Bing haven't had the smoothest of launches – and now Microsoft is making some changes to improve the user experience.

In a blog post (via The Verge), Microsoft says the tweaks should "help focus the chat sessions": the AI part of Bing is going to be limited to 50 chat 'turns' (a question and answer) per day, and five responses per chat session.
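The caps Microsoft describes can be pictured as two simple counters. This is a purely illustrative sketch (the `ChatLimiter` class and its method names are our own invention, not anything Microsoft has published); only the numbers 5 and 50 come from the blog post:

```python
# Hypothetical sketch of the kind of cap Microsoft describes:
# five responses per chat session, 50 turns per day.
class ChatLimiter:
    SESSION_CAP = 5   # responses allowed per chat session
    DAILY_CAP = 50    # total 'turns' allowed per day

    def __init__(self):
        self.daily_turns = 0
        self.session_turns = 0

    def start_new_session(self):
        self.session_turns = 0  # fresh topic, fresh counter

    def allow_turn(self):
        """One 'turn' = one question plus one answer."""
        if self.daily_turns >= self.DAILY_CAP:
            return False  # out of turns for the day
        if self.session_turns >= self.SESSION_CAP:
            return False  # session must be reset first
        self.daily_turns += 1
        self.session_turns += 1
        return True
```

Once the fifth response in a session is used up, the user has to start a new topic before asking anything else, which is exactly the behavior the limits are meant to enforce.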

This has been coming: Microsoft executives have previously gone on record saying that they were looking into ways of cutting out some of the weird behavior that's been noticed by early testers of the AI bot service.

Put to the test

Those early testers have been testing pretty hard: they've been able to get the bot, based on an upgraded version of OpenAI's ChatGPT engine, to return inaccurate answers, get angry, and even question the nature of its own existence.

Having your search engine go through an existential crisis when you were just looking for a list of the best phones isn't ideal. Microsoft says that very long chat sessions get its AI confused, and that the "vast majority" of search queries can be answered within five responses.

The AI add-on for Bing isn't available for everyone yet, but Microsoft says it's working its way through the waiting list. If you're planning on trying out the new functionality, remember to keep your interactions brief and to the point.


Analysis: don't believe the hype just yet

Despite the early problems, there's clearly a lot of potential in the AI-powered search tools in development from Microsoft and Google. Whether you're searching for ideas for party games or places to visit, they're capable of returning fast, informed results – and you don't have to wade through pages of links to find them.

At the same time, there's clearly still a lot of work to do. Large Language Models (LLMs) like ChatGPT and Microsoft's version of it aren't really 'thinking' as such. They're like supercharged autocorrect engines, predicting which words should go after each other to produce a coherent and relevant response to what's being asked of them.
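That word-by-word prediction can be illustrated with a toy model. The sketch below is our own simplification (a tiny bigram model built from word-pair counts), not anything resembling the scale or architecture of ChatGPT, but the underlying idea is the same: the next word is chosen because it frequently followed the previous one, not because anything was "understood":

```python
from collections import Counter, defaultdict

# Toy bigram "language model": predict the next word purely from
# counts of which word followed which in a tiny training text.
corpus = "the best phones are the best picks the best laptops".split()

follow_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follow_counts[prev][nxt] += 1

def predict_next(word):
    """Return the word that most often followed `word` in the corpus."""
    counts = follow_counts[word]
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "best" followed "the" most often here
```

Real LLMs do this over tokens with billions of learned parameters rather than raw counts, but the output is still a statistically likely continuation, which is why confident-sounding answers can be entirely wrong.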

On top of that, there's the question of sourcing – if people are going to rely on AI to tell them what the best laptops are and put human writers out of a job, these chatbots won't have the data they need to produce their answers. Like traditional search engines, they're still very much dependent on content put together by actual people.

We did of course take the opportunity to ask the original ChatGPT why long interactions confuse LLMs: apparently it can make the AI models "too focused on the specific details of the conversation" and cause it to "fail to generalize to other contexts or topics", leading to looping behavior and responses that are "repetitive or irrelevant".

David Nield
Freelance Contributor

Dave is a freelance tech journalist who has been writing about gadgets, apps and the web for more than two decades. Based out of Stockport, England, on TechRadar you'll find him covering news, features and reviews, particularly for phones, tablets and wearables, and working weekends to ensure our breaking news coverage is the best in the business. David also has bylines at Gizmodo, T3, PopSci and a few other places besides, and spent many years editing the likes of PC Explorer and The Hardware Handbook.