'Current LLMs introduce substantial errors when editing work documents': Microsoft scientists find most AI models struggle with long-running tasks — so maybe don't trust them completely just yet


  • Microsoft researchers determine that current LLMs aren't good at long-running tasks
  • More interactions and less structure significantly reduce benchmark performance
  • "Python is the only domain where most models are ready"

New research from a trio of Microsoft researchers has uncovered a fundamental issue that could be blocking effective agentic AI: most AI models can't reliably handle long-running workflows.

To quantify their findings, the researchers introduced DELEGATE-52, a new benchmark that provides metrics across 52 domains, including coding, accounting, science and more.

Ultimately, the paper concluded current LLMs "introduce sparse but severe errors that silently corrupt documents, compounding over long interaction."


AI isn't that good at long-running tasks, yet

The study examines some of the latest AI models, including Gemini 3.1 Pro, Claude 4.6 Opus and GPT-5.4. It found that even they "corrupt an average of 25% of document content by the end of long workflows," with lesser models even more likely to get things wrong.

The DELEGATE-52 benchmark uses real documents of around 15K tokens and introduces 5-10 complex editing tasks per document, along with a "round-trip relay simulation" that asks the AI to perform a transformation and then reverse it. This lets the researchers measure how faithfully each model reconstructs a document back to its original form.
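The paper doesn't publish its scoring code, but the round-trip idea can be sketched in a few lines: compare the original document against the version that has been transformed and then un-transformed, and report how much content no longer matches. This is only an illustration; the function name and the use of `difflib` as the similarity metric are assumptions, not the paper's actual method.

```python
import difflib

def corruption_rate(original: str, round_tripped: str) -> float:
    """Fraction of the original document that no longer matches
    after an edit has been applied and then reversed.

    difflib's similarity ratio stands in here for whatever
    document-diff metric the researchers actually used.
    """
    matcher = difflib.SequenceMatcher(None, original, round_tripped)
    return 1.0 - matcher.ratio()

# Toy example: a "model" that silently drops a clause while
# reversing its own edit -- the kind of sparse, silent error
# the paper describes.
original = "Revenue rose 4% in Q3, driven by cloud and advertising."
round_tripped = "Revenue rose 4% in Q3, driven by cloud."

print(f"Corruption: {corruption_rate(original, round_tripped):.0%}")
```

A perfect round trip scores 0% corruption; any silently dropped or mangled content pushes the score up, and repeating the relay over many interactions shows how those errors compound.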

Highly structured and programmatic areas were where the models performed best, with the Microsoft researchers concluding that "Python is the only domain where most models are ready." Conversely, natural language workflows, creative areas and semi-structured documents saw models struggle.

The paper also finds that the longer the token length, the more likely an AI model is to struggle.

Where frontier models differed was not that they eliminated errors, but that they delayed them. The other models tested by Microsoft's researchers included several GPT-5 and GPT-4 generations, various Claude and Gemini models, and one each from Mistral, xAI and Moonshot, for a total of 19 models across six families.

Gemini 3.1 Pro took first place with a DELEGATE-52 score of 80.9% after 20 interactions; Claude 4.6 Opus (73.1%) and GPT-5.4 (71.5%) rounded out the top three, while GPT-5 Nano (10.0%) came last.

In short, the paper concludes that today's AI models are not reliable enough to be trusted with long-running, autonomous workflows, highlighting key areas where model developers must focus and offering up yet another benchmark for measuring model capability.

Via The Register




With several years’ experience freelancing in tech and automotive circles, Craig’s specific interests lie in technology that is designed to better our lives, including AI and ML, productivity aids, and smart fitness. He is also passionate about cars and the decarbonisation of personal transportation. As an avid bargain-hunter, you can be sure that any deal Craig finds is top value!
