
The hidden costs of the shift to digital healthcare

By Rebecca Morris
published 21 March 2022

Our medical data is valuable and companies need to do a better job of protecting it


Since the start of the pandemic, a large proportion of healthcare provision has shifted online. We now have virtual visits with our doctors, text our therapists, and use apps to display our vaccination status and see if we’ve been exposed to Covid-19.

While this may be convenient in some scenarios, both patients and the healthcare industry as a whole need to pay closer attention to data security and privacy. That's because the information from our digital health tools is attractive to a variety of bad actors.

According to the experts, there are a few ways in which we can protect our data. But in the absence of stricter regulation, we largely have to rely on digital healthcare providers to do right by their customers, which has created a host of problems.

Risks to data security and privacy

Our medical records are a treasure trove of personal data. Not only do they include relatively standard information (e.g. your name, address and date of birth), they may also include lab results, diagnoses, immunization records, allergies, medications, X-rays, notes from your medical team and, if you live in the US, your social security number and insurance information.

All this personal information is incredibly valuable. Medical records sell for up to $1,000 on the dark web, compared to $1 for social security numbers and up to $110 for credit card information. And it's easy to see why: once a thief has your medical record, they have enough of your information to do real and lasting damage.

First, thieves can use your personal information to receive medical care for themselves, a type of fraud known as medical identity theft. This can mess up your medical record and threaten your own health if you need treatment. If you live in the US or other countries without universal healthcare, it can also leave you financially responsible for treatment you didn't receive.

Plus, your medical record might contain enough information for thieves to steal your financial identity and open up new loan and credit card accounts in your name, leaving you responsible for the bill. And, in the US, if your medical record contains your social security number, thieves can also file fraudulent tax returns in your name in tax-related identity theft, preventing you from receiving your tax refund.

The highly sensitive nature of medical records also opens up other, even more disturbing, possibilities. If, say, you have a stigmatized health condition, a thief can use your medical record as ammunition for blackmail. And in today’s politically charged climate, your Covid-19 vaccination status could be used for similar purposes. 

Worse still, as cybersecurity researcher and former hacker Alissa Knight explained in an interview with TechRadar Pro, "if I steal your patient data and I have all your allergy information, I know what can kill you because you're allergic to it."


What makes the theft of health information even more serious is that, once it’s been stolen, it’s out there for good. 

As Knight explained, "[it] can't be reset. No one can send you new patient history in the mail because it's been compromised." So dealing with the theft of your health information is much harder than, say, dealing with a stolen credit card. In fact, medical identity theft costs, on average, $13,500 for a victim to resolve, compared with $1,343 for financial identity theft. And, unfortunately, medical identity theft is on the rise.

But thieves are not the only ones interested in your health data. It’s also incredibly valuable to advertisers, marketers and analytics companies. Privacy regulations, like HIPAA in the US and the GDPR and DPA in Europe and the UK, place restrictions on who healthcare providers can share your medical records with. But many apps developed by third parties don’t fall under HIPAA and some don’t comply with GDPR.

For example, if you download a fertility app or a mental health app and input sensitive information, that data will probably not be protected by HIPAA. Instead, the protections that apply to your data will be governed by the app's privacy policy. But research has shown that health apps send data in ways that go beyond what they state in their privacy policies, or fail to have privacy policies at all, which is confusing for the consumer and potentially illegal in Europe and the UK.

So, while convenient, online and mobile health tools pose a real risk to the security and privacy of our sensitive data. The pandemic has both exposed and heightened this risk.

Security failures during the pandemic

The pandemic has seen an alarming rise in healthcare data breaches. The first year of the pandemic saw a 25% increase in these breaches, while 2021 broke all previous records.

Some of these security lapses involve pandemic-focused digital health tools. For example, UK company Babylon Health introduced a security flaw into its telemedicine app that allowed some patients to view video recordings of other people's doctors' appointments. And the US vaccine passport app Docket contained a flaw that let anyone obtain users' names, dates of birth and vaccination status from QR codes it generated, although the company was quick to release a fix.

Non-pandemic-focused tools were also affected. For example, QRS, a patient portal provider, suffered a breach impacting over 320,000 patients, and UW Health discovered a breach of its MyChart patient portal that affected over 4,000 patients.

Knight’s research, however, shows that the security of digital healthcare is far worse than even these examples suggest. In two reports published last year, she demonstrated that there are significant vulnerabilities in the application programming interfaces (APIs) used by health apps.

APIs provide a way for applications to talk to each other and exchange data. This can be extremely useful in healthcare when patients may have health records from different providers, as well as information collected from their fitness trackers, that they want to manage all in one app.

But vulnerabilities in APIs leave patient data exposed. One way this can happen is through what’s known as a Broken Object Level Authorization (BOLA) vulnerability. If an API is vulnerable to BOLA, an authenticated user can gain access to data they shouldn’t have access to. For example, one patient might be able to view other patients’ records.
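The BOLA pattern described above can be sketched in a few lines of Python. This is an illustrative toy, not any real healthcare API: the record store, function names and ownership field are all invented for the example. The flaw is simply that the server checks *who you are* (authentication) but never checks *whether the record you asked for is yours* (authorization).

```python
# Toy illustration of a BOLA (Broken Object Level Authorization) flaw.
# Plain dictionaries stand in for a patient-records API; all names are
# hypothetical and exist only for this sketch.

RECORDS = {
    1: {"owner": "alice", "allergies": ["penicillin"]},
    2: {"owner": "bob", "allergies": ["latex"]},
}

def get_record_vulnerable(authenticated_user: str, record_id: int) -> dict:
    """Vulnerable: any authenticated user can fetch ANY record by ID."""
    # The caller is authenticated, but ownership is never checked.
    return RECORDS[record_id]

def get_record_fixed(authenticated_user: str, record_id: int) -> dict:
    """Fixed: the server verifies the record belongs to the caller."""
    record = RECORDS[record_id]
    if record["owner"] != authenticated_user:
        raise PermissionError("not authorized for this record")
    return record

# "alice" can read bob's record through the vulnerable endpoint...
print(get_record_vulnerable("alice", 2)["owner"])  # bob
# ...but the same request is rejected by the fixed endpoint.
try:
    get_record_fixed("alice", 2)
except PermissionError as err:
    print(err)
```

In a real API the record ID would arrive in a URL such as `/patients/2/records`, and the fix is the same: every object lookup must be paired with a server-side ownership or permission check, rather than trusting that a logged-in user will only request their own IDs.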

A bug in the Babylon Health app allowed patients to view recordings of other people's appointments in 2020.

All the APIs Knight tested as part of the research documented in her first report were vulnerable to these kinds of attacks. And three out of the five she tested in her second report had BOLA and other vulnerabilities, which gave her unauthorized access to more than 4 million records. In some cases, Knight told TechRadar Pro, she was able to “actually modify dosage levels [of other people’s prescriptions], so if I wanted to cause harm to someone, just going in there and hacking the data and changing the prescription dosage to two or three times what they're supposed to take could kill someone."

Although the reasons behind these security lapses are multifaceted, the rush to make apps available during the pandemic did not help. In Knight’s words, “security got left behind.”

But while the situation may seem bleak, Knight is relatively optimistic about the future. She believes that “true security starts with awareness” and insists "industries need to be educated on the attack surface with their APIs and know that they need to begin protecting their APIs with API threat management solutions instead of old legacy controls that they're used to”.

In the meantime, there's little consumers can do to protect their health data from API vulnerabilities. As Knight said, "a lot of these problems are outside of the consumers' hands." She noted that "the responsibility is on the board of directors and the shareholders to make sure that companies are making more secure products.”

Privacy and the pandemic

Besides staggering security flaws, the pandemic has also brought about significant violations of privacy.

Some of these failures occurred in pandemic-focused apps. In the US, for example, the government-approved contact-tracing app for North and South Dakota was found to be violating its own privacy policy by sending user information to Foursquare, a company that provides location data to marketers. And in Singapore, while the government initially assured users of its contact-tracing app that the data would not be used for any other purpose, it was later revealed that the police could access it for certain criminal investigations.

Mental health apps were also the subject of pandemic privacy scandals. For example, Talkspace, which offers mental health treatment online, allegedly data-mined anonymized patient-therapist transcripts with the goal of identifying keywords it could use to better market its product. Talkspace denies the allegations. More recently, Crisis Text Line, a non-profit that, according to its website, "provides free, 24/7 mental health support via text message," was criticized for sharing anonymized data from its users' text conversations with Loris.ai, a company that makes customer service software. After the resulting backlash, Crisis Text Line ended its data-sharing arrangement with the company.

Nicole Martinez-Martin, an assistant professor at the Stanford Center for Biomedical Ethics, told TechRadar Pro that one problem with mental health apps is that it can be "difficult for the average person, even informed about what some of the risks are, to evaluate [the privacy issues they pose]”.

The privacy threats posed by healthcare apps can be tough for the average consumer to assess.

This is especially problematic given the demand for such apps amid the mental health crisis that has accompanied the pandemic. Martinez-Martin pointed out that there are online resources, such as PsyberGuide, that can help, but she also noted "it can be hard to get the word out" about these guides.

Martinez-Martin also said that the Crisis Text Line case "really exemplifies the larger power imbalances and potential harms that exist in the larger system" of digital mental health.

But maybe there’s still reason to be cautiously optimistic about the future. Just as Knight believes that “true security starts with awareness”, perhaps better privacy starts with awareness, too. And the pandemic has certainly highlighted the significant privacy risks associated with digital health.

Martinez-Martin pointed to "regulation, as well as additional guidance at a few different levels, for developers and for clinicians using these types of technologies" as steps we can take to help tackle these risks.

What can be done?

While the pandemic has shown us the convenience of digital health tools, it has also thrown their security and privacy issues into sharp relief. Much of the responsibility for addressing these problems lies with the healthcare industry itself. For patients and consumers, however, this can be frightening and frustrating because companies may not have much, if any, motivation to make these changes on their own.

But consumers, patients, and security and privacy experts can push for stricter regulations and attempt to hold companies accountable for their failures. It's true that we may not always have the leverage to do this. For example, at the beginning of the pandemic, when in-person doctors' appointments were not available, we had no option but to give up some of our security and privacy to receive care via telehealth. However, the increased awareness the pandemic has brought to security and privacy issues can work to our advantage. For example, the public criticism of Crisis Text Line caused it to reverse course and end the controversial data-sharing relationship it had with Loris.ai. 

Basic security hygiene on the part of patients and consumers can also help. According to Stirling Martin, SVP of healthcare software company Epic, there are two steps patients can take to protect their data:

“First, exercise care in deciding which applications beyond those provided by their healthcare organization they want to entrust their healthcare information to. Second, leverage multifactor authentication when provided to further secure their accounts beyond just simple username and passwords.”

By taking advantage of the increased awareness of security and privacy risks, holding companies accountable, and practicing good security hygiene ourselves, we stand a chance of improving protections for our medical data.


Rebecca Morris is a freelance writer based in Minnesota. She writes about tech, math and science.
