AI models for patient care: The transformers will see you now

Healthcare is struggling.

The British Social Attitudes Survey finds satisfaction with the health service has dropped to a new low, with fewer than a quarter of people saying they were satisfied with the NHS in 2023. That's a hard pill to swallow, and it points to a deeply fragmented and broken system.

One of the big challenges plaguing the healthcare system is the growing elective backlog. So it’s not surprising to see that waitlists and staff shortages continue to be listed as the biggest concerns of patients.

As people live longer, many with chronic conditions, hospitals will need to cope with a 40% increase in demand by 2035, with the IFS predicting healthcare spending will need to rise by 3.3% a year over that period.

Whilst healthtech often lags behind other sectors in innovation due to necessary regulatory and patient safety requirements, the new era of AI productivity offers a glimmer of hope.

The transformer architecture, a form of deep neural network first published in 2017, powers today's best Large Language Models (LLMs). As well as being trained to understand and create text, image and audio content, transformers can be trained to tackle other types of problems.

When we combine AI agents built on different models, each optimized for its own problem area, we will get a suite of helpers that allow tomorrow's doctors and nurses to care for more patients, at lower cost and with better outcomes.

Can we trust AI?

AI models aren't perfect. Traditional AI models can predict sequences, look for anomalies or categorize items, and their success is measured by statistical analysis of how often they are correct. LLMs, built using transformers, work by predicting the most likely output, and while they are often right, they are sometimes wrong.

For those who are worried about AI in healthcare, a better way to think of this is to focus on the error rate of an AI model. It’s not enough for AI to simply match the error rate of humans. It needs to be 10x, 100x or possibly 1000x safer to build the necessary trust.

In healthcare, not all situations carry equal risk, and some AI uses are already well established. For example, a trained doctor or nurse will be familiar with the need to challenge AI-generated analysis of diagnostic images in a way a member of the public wouldn't.

Solve the easier problems first

So much of healthcare is a communication problem, whether that's patients struggling to get through on the phone or doctors writing notes for their colleagues.

LLMs will drive rapid improvement here through summarization, categorization, transcription, translation and voice interfaces. The technology rolling out across contact centers and help desks in other industries will help lift the pressure on overstretched booking teams and improve the experience for patients.

Analyze the medical records

Whilst transformer-based LLMs are trained on vast volumes of data from the internet to tune their billions of parameters, they have had rather limited 'context windows', the size of the input you can feed in to get a result out. That is changing rapidly: models can now assimilate even the thickest of electronic patient notes files, which is critical for efficiency gains in healthcare.

Transformers have a stage called the 'attention mechanism', which learns how different parts of the input relate to each other. This might help a model understand that the words 'big cat' are closely related to 'lion', or, in a model trained on medical data, help it understand the interactions between different drugs.
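
For the more technically minded, the toy sketch below shows scaled dot-product attention, the calculation at the heart of that mechanism, in a few lines of Python. The token labels and vector values are purely illustrative and not taken from any real model, which would also apply learned query, key and value projections first.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Minimal scaled dot-product attention: each query scores every key,
    and the softmaxed scores decide how the value vectors are mixed."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                       # query-key similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)  # softmax per row
    return weights @ V, weights

# Toy embeddings for three tokens (illustrative values only).
tokens = ["big", "cat", "lion"]
X = np.array([[1.0, 0.2],
              [0.9, 0.3],
              [0.8, 0.4]])

# Using X as queries, keys and values keeps the sketch short.
output, weights = scaled_dot_product_attention(X, X, X)
print(np.round(weights, 2))  # how strongly each token attends to the others
```

Each row of the printed matrix shows how strongly one token attends to the others, which is how related concepts such as 'big cat' and 'lion' end up influencing each other's representations.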

With greater digitization of medical records, we've been able to bring in a number of automated rule sets that systems apply to things such as medicines and allergies. These rules work where an item has been coded in the Electronic Health Record, but far more information resides in the free-text documents that make up the bulk of a patient's file. AI models will extend this by analyzing the patient's notes and medical history to flag things that may have been overlooked.
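
As a simplified illustration of the gap between coded data and free text, the sketch below checks a prescription against a patient's coded allergy list. The record layout, drug names and codes are hypothetical stand-ins (real systems use standard terminologies such as SNOMED CT), and anything mentioned only in the free-text notes is invisible to a rule like this, which is exactly where AI models can help.

```python
# Hypothetical coded EHR fragment; real records use standard code sets
# rather than these illustrative strings.
patient_record = {
    "coded_allergies": {"penicillin"},
    "free_text_notes": "Pt reports rash after amoxicillin course in 2019 ...",
}

# Illustrative mapping from drugs to the allergy classes they belong to.
DRUG_ALLERGY_CLASSES = {
    "amoxicillin": {"penicillin"},
    "ibuprofen": {"nsaid"},
}

def coded_allergy_alerts(record, prescribed_drug):
    """Flag a prescription that clashes with a *coded* allergy.
    Allergies mentioned only in free text are not seen by this rule."""
    classes = DRUG_ALLERGY_CLASSES.get(prescribed_drug, set())
    clashes = classes & record["coded_allergies"]
    return [f"{prescribed_drug} conflicts with coded allergy: {c}" for c in clashes]

print(coded_allergy_alerts(patient_record, "amoxicillin"))
```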

Give your assistant a goal

LLMs today excel at knowledge-based tasks. They can understand your intent and context, and can generate good responses. The focus now is on making them better at reasoning tasks. These typically involve the model creating a series of subtasks towards its goal, called a chain of thought. As it acts on each subtask, it may then update its chain of thought based on what it observes.

This method is powerful when the model is given skills and access to APIs, such as for booking appointments or messaging patients.
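
A heavily simplified sketch of such an agent loop is shown below. The booking and messaging functions, the patient identifier and the llm_propose_step stand-in for the model are all hypothetical placeholders rather than any particular product's API; in a real system the model would propose each subtask itself and revise its plan as observations come back.

```python
# Hypothetical tools an assistant might be given access to.
def book_appointment(patient_id, slot):
    return f"Appointment booked for {patient_id} at {slot}"

def message_patient(patient_id, text):
    return f"Message sent to {patient_id}: {text}"

TOOLS = {"book_appointment": book_appointment, "message_patient": message_patient}

def llm_propose_step(goal, history):
    """Stand-in for the model. A real implementation would ask an LLM to
    reason over the goal and prior observations and return the next subtask;
    here a fixed two-step plan is returned purely for illustration."""
    plan = [
        ("book_appointment", {"patient_id": "patient-123", "slot": "Tuesday 09:30"}),
        ("message_patient", {"patient_id": "patient-123",
                             "text": "Your referral appointment is booked for Tuesday 09:30."}),
    ]
    return plan[len(history)] if len(history) < len(plan) else None

def run_agent(goal, max_steps=5):
    history = []  # the evolving chain of thought: subtasks plus observations
    for _ in range(max_steps):
        step = llm_propose_step(goal, history)
        if step is None:          # model judges the goal complete
            break
        tool, args = step
        observation = TOOLS[tool](**args)
        history.append((tool, args, observation))
    return history

for tool, args, observation in run_agent("Arrange a referral appointment"):
    print(observation)
```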

In the future, these assistants could coordinate and arrange new patient referral bookings while keeping the patient informed, and also manage results coming back from diagnostics.

Keeping control

These advances would have seemed more science fiction than reality a few years ago.

But now the tech is ready, and the key to adoption is the acceptance and trust of people, both clinicians and patients. To aid this, it is important to allow choice and control in the process, whether by letting patients opt out or by letting a doctor override a suggestion from a model. As well as making adoption easier, this limits risk and liability, so system suppliers will welcome it too.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing, find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Perran Pengelly is the CTO of DrDoctor.