User privacy must come first with biometrics

The rapid rise and expansion of artificial intelligence (AI) use cases in recent years have led companies to sharply increase their experimentation with and adoption of facial recognition and other biometric technology in consumer-facing products and services. Apple pioneered the trend when it introduced Face ID, letting users unlock their iPhones with a simple scan of their face and shifting the use of biometric data from innovation to normalization.

Now, biometric data is a common form of personal currency, a credential entirely unique to the individual. Use cases have expanded to biometric boarding at airports, transaction authentication in mobile banking and e-commerce, and even surveillance by various branches of law enforcement.

The benefits of AI-powered facial recognition technology are substantial, with the potential for dramatic gains in efficiency, security and ease of use across industries. But with that upside comes an equally compelling downside: organizations need to weigh the privacy risks and concerns that come with collecting and using biometric data at scale.

Consumer trust

Firstly, consumer trust isn’t where it should be for technology vendors: according to the Pew Research Center, only 36% of survey respondents said they trust technology vendors to use facial recognition technology responsibly. Vendors need to consider the negative implications a shift to biometric data may have for consumer consent, data governance and compliance with the various data privacy rules and regulations, or risk fines and penalties for misuse. Early attempts by companies like Rite Aid to walk that line have ended poorly, and with support for data privacy legislation growing as society prepares for more AI, biometric data remains a key battleground for the hearts and minds of the public.

This is compounded by the fact that the adoption and evolution of facial recognition technology is moving faster than regulators and vendors can keep up with. The EU’s upcoming AI Act places certain uses of facial recognition, such as real-time remote biometric identification in public spaces, in the “unacceptable risk” category, but without a clear path to enforcing AI regulation, the first wave of laws will shape the safe, widespread rollout of the technology far less than developers themselves will.

Risk management and user privacy

When it comes to risk management and user privacy, it’s crucial for businesses to understand how facial recognition technology and other expanded use cases for AI extend beyond the surface:

Threat to individual privacy and personal rights: With facial recognition deployed at scale in public places, users and citizens will soon be unable to go virtually anywhere in public without being surveilled, posing a major threat to privacy at a time when many already feel vulnerable.

Expanded risk surface for data vulnerabilities: Organizations already collect more personally identifiable information (PII) than ever, and an influx of biometric data will dramatically increase that amount, creating more sensitive data than nearly any company is equipped to handle safely.

Increased opportunity for fraud and identity theft: PII is incredibly sensitive and, when compromised, can give criminals access to bank accounts, health records and other valuable data caches. Unlike a password, a face cannot be changed once it is stolen. There are already cases of Face ID being bypassed to access iPhones, and ever-larger databases of biometric data only make future incidents more likely.

Programming biases and imperfections: Because AI-fueled facial recognition identifies facial characteristics based on how its models are trained and how its databases are populated, it carries inherent bias and can misidentify certain groups, leading to privacy harms and the perpetuation of entrenched social biases.

With these potential issues in mind, companies developing facial recognition technology must approach the process with a holistic view of the day-to-day user experience alongside the long-term customer journey and individual well-being. Social media developers likely could not have foreseen that their platforms would contribute to widespread drops in attention spans or sharp rises in anxiety among the population. As we venture further into technology once deemed futuristic, privacy harms and technological repercussions have to be weighed and incorporated into the earliest stages of design.

Innovation and security

This is not said to fearmonger; when done right, AI-based facial recognition technology can offer an innovative and more secure approach to identity verification.

Since this is an emerging technology still in its early stages, there’s time for organizations to ensure they keep pace with innovation while protecting customer privacy and biometric data. Here are a few best practices to consider when building facial recognition and AI into business models and consumer products:

Transparency and communication: Scanning someone’s face when they aren’t aware it’s happening, don’t know where the data is going, or don’t know who will have access to it is not only invasive but can be illegal if not conducted within proper guidelines. Most current data privacy laws operate on some form of consent model, which will be a tricky path for facial recognition systems scanning thousands of people daily. That makes it critical for organizations to educate consumers on the technology clearly but quickly and to gain user consent in a transparent, auditable manner.
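
Making that consent auditable, rather than a one-off checkbox, means keeping a record of who consented, for which purpose and for how long alongside the biometric data itself. Below is a minimal sketch of what such a record might look like; the class, purposes and field names are illustrative assumptions, not any specific law’s schema.

```python
# A minimal sketch of an auditable biometric-consent record (illustrative names only).
from dataclasses import dataclass
from datetime import datetime, timezone
from enum import Enum
from typing import Optional


class Purpose(Enum):
    AUTHENTICATION = "authentication"   # e.g. unlocking a device or account
    BOARDING = "boarding"               # e.g. airport biometric boarding
    PAYMENT = "payment"                 # e.g. authorizing a transaction


@dataclass
class ConsentRecord:
    subject_id: str                     # pseudonymous user identifier, never the raw face template
    purpose: Purpose                    # a consent grant should be scoped to a single purpose
    granted_at: datetime
    expires_at: Optional[datetime] = None
    revoked_at: Optional[datetime] = None

    def is_valid(self, now: Optional[datetime] = None) -> bool:
        """Consent is valid only if it has not been revoked and has not expired."""
        now = now or datetime.now(timezone.utc)
        if self.revoked_at is not None:
            return False
        if self.expires_at is not None and now >= self.expires_at:
            return False
        return True


# Usage: check consent before running a face scan for a given purpose.
record = ConsentRecord("user-1842", Purpose.AUTHENTICATION, granted_at=datetime.now(timezone.utc))
if record.is_valid():
    pass  # proceed with the scan; otherwise prompt for (re)consent
```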

Ensure diversity and quality in dataset programming: AI tools can only be as good as their algorithms and datasets. Organizations and tech vendors need to train facial recognition platforms on broadly diverse datasets that represent many subsets of people and faces, to avoid bias and minimize harm to underrepresented groups.
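
One concrete way to act on this is to audit the training set before a model ever sees it. The snippet below is a minimal sketch of such a check, assuming each image carries a demographic group annotation; the group labels and the 10% threshold are illustrative assumptions, and representation alone is no guarantee of fairness, so per-group error rates still need to be measured after training.

```python
# A minimal sketch of a pre-training sanity check: flag demographic groups
# that fall below a representation threshold. Labels and threshold are illustrative.
from collections import Counter

def underrepresented_groups(group_labels: list[str], min_share: float = 0.10) -> dict[str, float]:
    """Return groups whose share of the training set falls below min_share."""
    counts = Counter(group_labels)
    total = sum(counts.values())
    shares = {group: n / total for group, n in counts.items()}
    return {group: share for group, share in shares.items() if share < min_share}

# Example: a toy set of per-image group annotations.
labels = ["group_a"] * 700 + ["group_b"] * 250 + ["group_c"] * 50
print(underrepresented_groups(labels))  # {'group_c': 0.05}
```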

Protect the data: After collecting biometric data at mass scale for hundreds to thousands of users, it’s vital to put the right safeguards around the sensitive data you capture. Organizations must maintain a comprehensive data protection and security strategy to ensure maximum defenses are in place against data breaches and leaks. I would argue current security frameworks will soon become inadequate, meaning companies developing AI and using data to train it must go above and beyond current data security standards to ensure the technology of tomorrow is safe for consumers.
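
What those safeguards look like will vary, but one baseline is that raw face templates should never sit on disk in plaintext. The snippet below is a minimal sketch of encrypting a biometric template at rest using the widely available cryptography package; it assumes key management (rotation, an HSM or KMS) happens elsewhere, which in practice is the harder part.

```python
# A minimal sketch of one safeguard: encrypting biometric templates at rest.
# Uses the third-party "cryptography" package; key management is assumed to live elsewhere.
from cryptography.fernet import Fernet

key = Fernet.generate_key()          # in production, fetch this from a key-management service
cipher = Fernet(key)

face_template = b"\x10\x2f..."       # placeholder bytes standing in for a face embedding
encrypted = cipher.encrypt(face_template)

# Only decrypt at the moment of comparison; never store plaintext templates.
decrypted = cipher.decrypt(encrypted)
assert decrypted == face_template
```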

Summing up

User privacy needs to be prioritized when handling biometric data. This information is so sensitive and personal that any innovation it can drive must take a backseat to privacy, as the harms of poorly implemented facial recognition technology outweigh the benefits.

As biometrics continues to go mainstream, data discovery, data classification and the handling of sensitive information will become mainstays on IT task lists. But the key to not overwhelming IT is to incorporate data privacy principles and tactics at the start of development, so problems can be tackled proactively rather than reactively.

This will be tech’s main challenge in the coming years. With AI fever everywhere, users will soon expect to access facial recognition services and products in a more personalized, efficient way, without compromising on the privacy front. If companies fail to accomplish that and to keep users safe, there is little reason to invest in AI. But if companies can manage the task and protect privacy alongside innovation, then the future is now.

This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro

Gal Ringel

Gal Ringel is the co-founder and CEO of Mine.