Unlocking science: building AI researchers can trust


Productivity is a pressing problem for many governments, and the UK is no exception.

That pressure is trickling down to British researchers, who are expected to be the driving force propelling the UK towards the top of international R&D rankings through the quality and impact of their research.

International competition is hotting up, however, and UK researchers will need to do more to maintain the UK’s global standing.

Maxim Khan

SVP for Academic and Government Solutions, Elsevier.

Wider trends in the research sector compound this pressure. Globally, only 45% of researchers said they have sufficient time for research, and just 33% expect funding to increase in the next two to three years.

AI is often held up as a productivity silver bullet. The sheer value of the UK’s investment into AI technologies – collectively, billions of pounds – shows that while productivity is not the only goal, it’s certainly towards the top of the list.

The Government’s AI for Science strategy specifically champions the idea of using AI to “transform scientific productivity and progress”.

AI for research: not as simple as it sounds

It’s no surprise that AI tools are being adopted by researchers and integrated into their workflows – the problem is that they're being adopted faster than governance can keep up. Over half of researchers now say they use AI tools in their work, but only two in ten believe generic AI tools are trustworthy.

Compounding these problems are the systemic challenges of AI use: generic AI tools tend to simplify responses, smoothing out detail and, in the process, eroding accuracy, transparency and context. For lay purposes, these problems are not insurmountable.

But for researchers, whose work must withstand the demands of reproducibility and scrutiny, it's simply not enough. Many talk about measuring 'AI productivity' in terms of what AI can produce versus what a (slower) human can produce.

This ignores the trustworthiness of the AI's output: if a human needs to verify everything the AI tool produces, it unravels all the so-called ‘productivity’ gains.

Building AI tools for research

These problems shouldn't prevent researchers from using AI with confidence, but first, the problem needs to be reframed. As with any tool, it’s about picking the right tool for the job. To date, researchers haven’t had the right tools.

The solution can be seen in many other fields: specialized tools for specialized jobs. Doctors use ECG monitors, not smartwatches, to measure heart rates; commercial builders use automated production systems for precision measurements, not a tape measure.

While lay tools will do the job for fitness fans who want to monitor their heart rate, or DIY home improvement enthusiasts, the professionals use industrial-grade products that aren’t available on the high street.

The same 'right tool for the right job' principle should apply to AI in research: professional AI tools designed with researchers' highly specialized requirements in mind, which they can, importantly, trust to do the task at hand.

Once that trust is established, it removes the need for excessive scrutiny of the AI's output. If researchers don't need to waste time on this, it's a real productivity gain, not one that looks good on paper but proves pointless in practice.

What does ‘the right AI tool’ for researchers look like?

Firstly, what it doesn’t look like is an assumption – you need to ask the researchers themselves. In working closely with the community, we’ve learned that “researcher-grade” AI has four key ingredients:

- Researchers want tools that support critical thinking – tools that flag any uncertainties in what they have produced, rather than glossing over them.

- AI-generated insights should be contextualized, transparent and traceable – meaning that the AI tool shows its thinking and what sources were used to reach those conclusions.

- "Garbage in, garbage out" applies to an extreme degree in research. Peer-reviewed research is the gold standard and it should be brought together on publisher-neutral platforms to ensure that researchers can see the whole picture, not just a part of it.

- Finally, everything must be grounded in a foundation of data privacy, security and responsible use of AI.

In the world of research, AI must jump through more hoops to earn trust. It must be transparent about its sources, protect sensitive data and support human judgment rather than attempt to replace it.

Achieving that goal requires moving beyond generic AI and toward systems built with rigor and responsibility at their core.


This article was produced as part of TechRadar Pro Perspectives, our channel to feature the best and brightest minds in the technology industry today.

The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/pro/perspectives-how-to-submit


