Voice biometrics: the new defense against deepfakes

It’s no surprise we’ve seen a huge increase in fraud rates over the past year – the absence of face-to-face contact under lockdown has made it easier than ever for fraudsters to get past identity checks. Not only this, but the transition to a digital economy has built a sophisticated automation infrastructure that attackers can exploit, letting them pounce and then evade authorities at every turn. As the world has become increasingly digital over the past decade, deepfakes have emerged as a major security threat in an environment already plagued by misinformation.

About the author

Stephen Ritter is CTO at Mitek.

We’ve grown accustomed to seeing high-profile figures, from Mark Zuckerberg to Queen Elizabeth, deepfaked for our entertainment or to raise awareness of the threat. However, we’re veering toward a point where the technology will be used for more sinister purposes as it becomes more accessible to fraudsters. In the not-too-distant future, it will likely be ordinary people bearing the brunt of deepfake attacks, as fraudsters target the general public for financial gain. With a UCL study ranking deepfakes as one of the biggest threats we face today, it’s time to strengthen our defenses – and put fraudsters on the defensive.

An impending surge in fraud

Touted as the 2020s’ answer to Photoshopping, deepfakes use artificial intelligence to replace one person’s likeness or voice with another’s in recorded video or audio. Awareness of the technology stems from memes and fake videos shared online, but its ability to manipulate facial expressions and speech has caught the attention of fraudsters. High-profile instances of successful deepfake attacks include a 2019 case in which cybercriminals used faked voice recordings to impersonate a CEO and demand the transfer of $243,000. Until recently, successful deepfake attacks remained few and far between – but the pandemic put paid to that, opening the door for fraudsters.

We’re now seeing an uptick in deepfake tech and service offerings across the dark web, where users share illicit software, best practices and how-to guides. All of this demonstrates a concerted effort across the cybercrime sphere to sharpen deepfake tools – and it points to an impending new wave of fraud.

The worrying thing here is that deepfakes are one of those scary pieces of tech that let cybercriminals attack at scale. While successful attacks aren’t yet a common occurrence, they could become endemic as the technology continues to evolve.

In banking, rising branch closures coupled with the effects of the pandemic mean a new customer can now join a bank without ever setting foot in a branch. A fraudster could, theoretically, open a number of new credit cards under a false identity and, after a few months of solid credit history, max them all out and disappear. These unscrupulous individuals can then do the same again elsewhere, stealing identities en masse. If we don’t fight back against this, we may eventually reach a point where such attacks are beyond our control. So how can we possibly protect people against deepfake attacks?

The new line of defense

Biometric authentication is leading the charge in the growing fight against identity fraud. Banks are already using facial biometrics, in conjunction with liveness detection, to verify faces and documents and to ensure fraudsters aren’t bypassing screening processes with, for example, a photo of a photo. But as the capabilities of deepfakes continue to develop, the weapons in a fraudster’s armory could put them ahead of banks’ own systems. That’s why it’s time to add another link to the security chain and send the fraudsters running – and this is where our voices come in.

Many of us already use our voices to manage our everyday lives – we ask Alexa for the weather, Siri to call Mum and our smart plugs to turn the lamp off – yet we often associate voice technology with eroding our privacy rather than protecting it. In reality, voice offers a powerful and convenient form of biometrics with a critical role to play in improving anti-fraud defenses. Where one form of biometrics presents a solid defense against would-be hackers, two offer far more protection, which translates into lower fraud rates. In our experience, combining voice and face biometrics makes the verification process almost impenetrable to fraudsters, offering four layers of protection – liveness and recognition checks for both face and voice.
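
To make those four layers concrete, here is a minimal sketch of how a layered check might be wired together, assuming each biometric engine returns a score between 0 and 1. The score names and thresholds below are illustrative assumptions, not Mitek’s actual pipeline:

```python
from dataclasses import dataclass

# Hypothetical thresholds – a real deployment would tune these per channel and risk level.
LIVENESS_THRESHOLD = 0.80
FACE_MATCH_THRESHOLD = 0.90
VOICE_MATCH_THRESHOLD = 0.85


@dataclass
class BiometricScores:
    """Scores in [0, 1] produced by whatever face and voice engines a vendor supplies."""
    face_liveness: float
    face_match: float
    voice_liveness: float
    voice_match: float


def verify_identity(scores: BiometricScores) -> tuple[bool, list[str]]:
    """Require all four layers to pass: a deepfake that fools one layer
    still has to fool the other three."""
    failures = []
    if scores.face_liveness < LIVENESS_THRESHOLD:
        failures.append("face liveness failed (possible replayed or synthetic image)")
    if scores.face_match < FACE_MATCH_THRESHOLD:
        failures.append("face does not match enrolled template")
    if scores.voice_liveness < LIVENESS_THRESHOLD:
        failures.append("voice liveness failed (possible replayed or synthetic audio)")
    if scores.voice_match < VOICE_MATCH_THRESHOLD:
        failures.append("voice does not match enrolled voiceprint")
    return (not failures, failures)


# Example: a convincing face deepfake that cannot also beat the voice checks.
passed, reasons = verify_identity(BiometricScores(0.95, 0.93, 0.40, 0.30))
print(passed, reasons)
```

The design choice worth noting is the “all four must pass” rule: rather than averaging scores, each layer acts as an independent gate, so an attacker has to defeat face liveness, face recognition, voice liveness and voice recognition at the same time.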

Not only this, but voice biometrics can be collected from our devices easily and passively – meaning it isn’t hard to get consumers on board. It takes next to no extra effort, for example, if a bank that already uses liveness detection to check a selfie when someone signs up for a new account also asks them to repeat a phrase. The step may add a few seconds to the user experience, but it’s hardly a major hoop to jump through. And there will always be trade-offs when striking a balance between security and convenience.
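
For a sense of what that extra step might look like behind the scenes, here is a hypothetical sketch of the enrollment record a bank could store at sign-up. The template bytes stand in for whatever a face or voice SDK actually produces, and none of the names refer to a real product:

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Phrase the new customer is asked to repeat; in practice it could be
# randomized per session to make replayed recordings less useful.
ENROLLMENT_PHRASE = "My voice confirms my identity"


@dataclass
class EnrollmentRecord:
    customer_id: str
    face_template: bytes   # derived template from the selfie/liveness check, not the raw image
    voice_template: bytes  # derived "voiceprint" from the spoken phrase, not the raw audio
    enrolled_at: str


def enroll(customer_id: str, face_template: bytes, voice_template: bytes) -> EnrollmentRecord:
    """Store the two templates captured during sign-up.

    The selfie capture, liveness check and phrase recording happen in the
    client app; only the derived templates need to reach the back end.
    """
    if not face_template or not voice_template:
        raise ValueError("both a face template and a voice template are required")
    return EnrollmentRecord(
        customer_id=customer_id,
        face_template=face_template,
        voice_template=voice_template,
        enrolled_at=datetime.now(timezone.utc).isoformat(),
    )


record = enroll("cust-042", b"<face-template>", b"<voice-template>")
print(record.customer_id, record.enrolled_at)
```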

Managing risk in the digital economy

A focus on the right use cases is key to making voice biometrics an everyday form of authentication. The last thing people want is to use voice in a situation where it isn’t needed. Where text-based passwords are more than capable of protecting accounts on a retailer’s website, securing our bank accounts calls for more sophisticated means of authentication. However, people are unlikely to have a problem with putting layered biometrics into place if it means keeping their finances safe – especially if it doesn’t take more than a few seconds to do.
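
One way to read that “right use cases” point is as a simple risk-tiering policy: reserve layered biometrics for the actions where the stakes justify it. The tiers, action names and transfer cut-off below are illustrative assumptions, not a recommendation from any particular bank or vendor:

```python
from enum import Enum


class AuthMethod(Enum):
    PASSWORD = "password"
    PASSWORD_PLUS_OTP = "password + one-time code"
    LAYERED_BIOMETRICS = "face + voice biometrics with liveness checks"


def required_auth(action: str, amount: float = 0.0) -> AuthMethod:
    """Map an action to the lightest authentication that still fits its risk.

    Low-risk actions keep the convenience of a password; high-risk actions
    (opening an account, adding a payee, large transfers) step up to
    layered biometrics. The 1,000 transfer cut-off is purely illustrative.
    """
    low_risk = {"browse_catalogue", "track_order", "view_balance"}
    high_risk = {"open_account", "add_payee", "change_contact_details"}

    if action in low_risk:
        return AuthMethod.PASSWORD
    if action in high_risk or (action == "transfer" and amount >= 1000):
        return AuthMethod.LAYERED_BIOMETRICS
    return AuthMethod.PASSWORD_PLUS_OTP


print(required_auth("track_order"))            # AuthMethod.PASSWORD
print(required_auth("transfer", amount=5000))  # AuthMethod.LAYERED_BIOMETRICS
```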

Businesses will never be 100% protected, and any product that promised total security would ultimately be unusable by consumers. Instead, risk management is the name of the game. Combining traditional security steps with layered biometrics will give us the strongest chance of forcing fraudsters onto the back foot – especially in our new ‘digital-everything’ world.
