‘A confident answer isn’t the same as a correct one’ — I asked medical experts whether you should use ChatGPT for health advice, and I was shocked by their answers

People are increasingly turning to ChatGPT and other AI tools for health advice. They’re using them to make sense of symptoms, decode medical letters, understand diagnoses, seek reassurance before appointments, and prepare questions for clinicians.

That shift isn’t surprising. People have always looked online for health information, but AI tools are different. They’re conversational, personalized, always available, and often sound confident in their answers. Many of the same factors driving interest in AI for therapy are now shaping how people use it for health.

How are people using AI for health?

“AI use is ubiquitous,” says Jennifer Hinkel, a healthcare consultant and president of Signa Sciences. “Healthcare providers should now assume that any patient they see may have already asked ChatGPT before coming in – and that they’ll ask it again after they leave.”

I’ve seen a lot of claims that ChatGPT is now the first port of call for health questions, but what are people actually asking? “We’re seeing patients use it mainly for questions about their medications, symptoms, and to read about their diagnoses,” says Ranya Habash, MD, chief medical officer at Helfie AI.

People have always Googled their symptoms, but with AI, things feel different: patients are largely using ChatGPT as a more conversational version of Google. “ChatGPT feels more personal because it responds in natural language patterns,” Habash says. “It sounds like your friend, and it sounds confident.”

It’s easy to see why a tool like ChatGPT is so appealing. It sounds friendly, draws on vast amounts of information, and is very accessible. Many people already turn to it for work, planning, and personal advice, so asking a few health-related questions can feel like a natural extension of how they’re using it day to day rather than a deliberate decision to adopt a new health app or platform.

Where AI can genuinely help

“There is real value in patients being able to have what feels like a private conversation on health issues with a machine that has been programmed to be ‘helpful’ and show ‘caring’,” Hinkel says.

She also believes there’s something comforting about having constant access to information. “Whether it’s 2am or between 9am and 5pm, people can get answers,” she tells me. “We know health outcomes are better when patients are informed and engaged with their health, so if ChatGPT helps them get there, I see that as a positive impact.”

Both experts agreed that, when used carefully, tools like ChatGPT could help people feel more informed and engaged in their own health. “In most cases, patients having more information is useful and aligned with improved health outcomes,” Hinkel says. “When patients better understand their health, they’re better able to adopt behaviours that help them stay healthy.”

I spoke to several people who regularly use AI tools for health advice, and they echoed this view. Sophie (whose name has been changed here) told me she’s been relying on ChatGPT during early pregnancy, particularly after a stressful scan she attended alone.

“Uploading my notes and getting everything explained was so helpful and reassuring,” she said. “I didn’t really take in what the doctors were saying at the time. It’s the emotional framing afterwards that’s been helpful.” She also said she’d used ChatGPT to prepare for follow-up appointments, getting guidance on what to expect and what important information to communicate.

Habash agrees that this “translator” role is where AI shines. “It can turn medical jargon into plain language,” she says. “That helps patients and it helps doctors too. We want to make sure that our patients have a good understanding of their conditions.”

The harms experts are worried about

The same traits that make AI tools appealing, like confidence, fluency and reassurance, are also where some of the dangers might lie.

“I do see a few major risks,” Hinkel tells me. The first is that the quality of information depends heavily on how questions are asked. “The comprehensiveness of information correlates with the sophistication of the prompt,” she explains. “We’re not democratizing information if only people who know how to ask get full answers.”

The idea that you need the right prompt to get a useful answer comes up again and again with AI. I’ve seen so-called AI experts blame users for poor results, arguing that their prompting strategy isn’t good enough – often while trying to sell a course or framework to fix it. But that misses the real issue. I don’t see it as a user failure but as a mismatch between how AI is marketed to us and how it actually works. AI tools are sold as “ask anything” systems, with no mention of best-practice prompting at sign-up, so it makes little sense to fault people for not knowing rules they were never told existed.

The second risk is more straightforward. We know AI tools get things wrong. Sometimes very wrong. That can be due to misattributed sources, outdated information, or hallucinations, which is when a system confidently presents information that isn’t true.

That can happen in all sorts of contexts. But what’s specific to health is that while an AI can pull together large amounts of information, it isn’t a doctor and can’t meaningfully diagnose.

Claire (whose name has been changed) told me she noticed a growth on her nose and, while waiting to see a dermatologist, uploaded a photo to ChatGPT to ask what it might be. “It said it’s either skin cancer or something unharmful,” she says. “Which wasn’t very helpful.”

“When I said, ‘do you think it could just be a wart?’ it then said ‘yes, it could be a wart.’ So ChatGPT didn’t score any doctor points there,” she tells me.

We know that ChatGPT’s tendency to agree with everything a user tells it is a problem. But that agreeableness can have far more serious consequences when it comes to mental and physical health.

“AI advice sounds friendly and confident. That’s a blessing and a curse,” Habash says. “A confident answer is not the same as a correct one, and that can delay care.”

When it comes to diagnosis, context matters, and it’s something AI often lacks. That’s partly because a user can’t realistically supply every detail a clinician would consider, and partly because an AI doesn’t have a diagnostic process or lived clinical experience to know which signals matter most.

“The same symptoms can mean very different things depending on age, sex, and risk factors,” Hinkel explains. “But the AI doesn’t know the underlying risk unless it knows who’s asking.”

Habash adds that clinicians integrate nuance instinctively. “Chest pain, stroke signs, pregnancy-related pain, these depend on context, examination, and measurements. A conversation alone can’t capture that.”

Both experts raised concerns about health literacy and equity. “Lower health literacy is likely to give you more superficial answers,” Hinkel says. “Those who already understand their condition and know what to ask may benefit. Those who are genuinely confused may be left behind.” That creates a troubling divide if AI tools are positioned as a shortcut to better healthcare access.

Habash also flagged anxiety and chronic illness as risk factors. “People with overlapping conditions and multiple medications face more risk from generic advice. A confident answer can feel more definitive than it is.”

What to keep in mind if you use AI for health

Experts suggest treating AI tools as supports, not substitutes – and being deliberate about how you use them.

Understand how they work

You don’t need to know much. Just that AI often sounds confidently right (even when it’s not) and draws on vast amounts of information. Hinkel says it’s important to be aware that tools like ChatGPT will often serve up information based on frequency. “If something appears often in training data, it may rise to the top,” she warns, “even if it’s wrong.”

Check for outdated advice

Similarly, it’s useful to remember that AI tools learn from information that can be well out of date. They also have training data cut-off points, which means even the newest information they hold may be months old and may no longer reflect current standards of care. “Timeliness matters in health,” Hinkel says.

Ask better questions, but stay wary

More detail in your initial prompts can help, but even prompting tricks aren’t a safety net. “Verify everything,” Hinkel says.

Be mindful of privacy

We don’t know with absolute certainty how AI companies are currently using your data, or how they might use it in the future. So it’s worth pausing to consider whether you’re comfortable sharing highly sensitive details about your health.

Use AI for lower-risk tasks

Habash recommends using AI tools for advice about diet, exercise, stress relief, and understanding tests or medications, but not for diagnosis or treatment changes.

Pay attention to your body

Most importantly, don’t take ChatGPT’s advice above your own instinct. “If a symptom is sudden, severe, or getting worse, that’s your body telling you to get help,” Habash says.

Becca Caddy

Becca is a contributor to TechRadar, a freelance journalist and author. She’s been writing about consumer tech and popular science for more than ten years, covering all kinds of topics, including why robots have eyes and whether we’ll experience the overview effect one day. She’s particularly interested in VR/AR, wearables, digital health, space tech and chatting to experts and academics about the future. She’s contributed to TechRadar, T3, Wired, New Scientist, The Guardian, Inverse and many more. Her first book, Screen Time, came out in January 2021 with Bonnier Books. She loves science-fiction, brutalist architecture, and spending too much time floating through space in virtual reality. 
