Protecting digital integrity in the age of deepfakes and identity fraud

Has identity fraud experienced a coup? AI-powered synthesized-media attacks, commonly known as deepfakes, have turned the fraud landscape on its head, spiking 3,000% in 12 months and dominating discussion of what could destabilize digital integrity and trust in the years to come.

Deepfakes are fast becoming the tactic du jour for fraud. The increased availability of generative AI and face-swap apps is enabling cybercriminals to scale their impersonation-based deepfake attempts dramatically. The technology lets fraudsters manipulate a person's facial expression, voice, body language, and even skin texture. Sophisticated as it has already become, it will only continue to evolve and grow more convincing. Online fraud tactics are akin to a virus, mutating to evade cyber defenses and cause maximum damage.

That’s why businesses must stay one step ahead with a robust prevention strategy that can identify and protect against emerging threats, keeping both end-users and the business safe. At the same time, the strategy must be nuanced: it cannot create unnecessary friction for legitimate end-users trying to register for or access online services. It must be user-friendly and accessible. Achieving this will protect digital integrity amid surging identity theft and, crucially, maintain inclusion and trust in online businesses.

Vincent Guillevic

Head of Fraud Labs, Onfido.

How the automation era conceived deepfake fraud

The global fraud landscape has shifted substantially in recent years, in line with wider digital trends – notably mainstream access to AI and automation tools. Pre-pandemic, fraudsters followed the typical working week: a clock-in, clock-out, nine-to-five shift, with activity dropping at weekends. Those habits have changed as fraudsters have realized that AI and automation let them scale their attacks around the clock to hit as many targets as possible. As a result, industries that have historically seen high fraud volumes because of large cash incentives, such as gambling and gaming, have seen rates spike as high as 80% in the last year.

As businesses have moved to protect their operations from these fraud surges, bad actors have expanded their library of attack options. While most identity fraud is still focused on producing physical ID counterfeits, fraudsters are experimenting with AI to alter digital images and videos – creating deepfakes – to commit identity fraud, bypass cybersecurity systems and develop fake online media content.

Getting granular: Deepfakes versus cheap fakes

When we think about deepfakes, we often picture sophisticated videos impersonating politicians or celebrities. But it’s important to clarify that not all deepfake approaches are the same, and fraudsters will deploy the technology at varying scales depending on their resources, technical skillsets and desired outcomes. The moniker ‘cheap fakes’ signals the best differentiator – these are significantly less sophisticated than what we’d consider a typical deepfake. Think budget film versus blockbuster – same concept, very different execution. Also known as shallowfakes or low-tech fakes, cheap fakes use basic video editing software to manipulate imagery. They may involve minor tweaks like caption changes or image cropping, and because they lack realism they are much easier to detect, even to the untrained eye.

But the threat they pose shouldn’t be overlooked; they are still used to perpetrate large quantities of identity fraud. Particularly in today’s climate, where difficult economic circumstances are turning many into amateur fraudsters, cheap fakes are the first option for basement hackers armed with basic malware. Such attacks can also be deployed in tandem with larger fraud operations to impersonate legitimate customers and steal identities at onboarding or when accessing existing accounts. They have other uses too, for instance launching a bespoke misinformation campaign that can reach and defraud large audiences – such as the deepfake of Martin Lewis used to promote an investment scam.

A proactive approach to deepfake detection

In any form, deepfakes can disrupt access to online services, manipulate or mislead people, and damage a company’s reputation, so businesses must take a proactive approach to mitigating the threat. But they must find the right level of friction – ensuring customers can register for or access services seamlessly, while keeping bad actors out.

Firstly, businesses must train their teams to spot a deepfake, and there are certain tell-tale signs to look for. In videos, for instance, AI has not yet convincingly recreated natural eye movement and blinking, and a closer look at a deepfaked individual may reveal facial glitches and prolonged passivity. Videos also frequently fail to sync audio and image seamlessly, so businesses should follow the audio closely, watching the speaker’s pronunciation and any unnatural pauses for inconsistencies. Colors and shadows suffer the same shortcomings, and perfect accuracy is rare: look for shadows that appear out of place, particularly when the person moves, or colors that shift.
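The blinking cue above can even be turned into a crude automated check. The sketch below is purely illustrative and not a production detector: it assumes a per-frame eye-aspect-ratio (EAR) series has already been extracted by some face-landmark tool, and the thresholds and blink-rate norms are rough assumptions (humans blink roughly 15–20 times per minute).

```python
# Hypothetical heuristic: flag clips whose blink rate is far below the
# human norm. EAR values, thresholds, and rates are illustrative assumptions.

def count_blinks(ear_series, threshold=0.2):
    """Count contiguous dips of the eye-aspect-ratio (EAR) below `threshold`.

    Each run of frames with EAR < threshold counts as one blink.
    """
    blinks = 0
    eyes_closed = False
    for ear in ear_series:
        if ear < threshold and not eyes_closed:
            blinks += 1
            eyes_closed = True
        elif ear >= threshold:
            eyes_closed = False
    return blinks

def looks_suspicious(ear_series, fps=30, min_blinks_per_minute=5):
    """Flag a clip whose blink rate falls far below the human average."""
    minutes = len(ear_series) / fps / 60
    if minutes == 0:
        return False
    return count_blinks(ear_series) / minutes < min_blinks_per_minute

# Synthetic 10-second clip at 30 fps: eyes open (EAR ~0.3) with two blinks.
clip = [0.3] * 300
for start in (60, 200):
    for i in range(start, start + 5):
        clip[i] = 0.1

print(count_blinks(clip))      # 2 blinks in 10 seconds -> 12 per minute
print(looks_suspicious(clip))  # False: within the normal human range
```

In practice a single weak signal like this proves nothing on its own; real detection systems combine many such cues with trained models, and modern deepfakes increasingly simulate blinking.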

Secondly, businesses need to invest in their cyber defense resources. Fraud is a game of cat and mouse, and businesses need the right partner and platform on board to bolster their defenses and stay ahead. As deepfakes are commonly deployed through web browsers rather than applications native to a given operating system, businesses should look to incorporate a solution that aligns with web-native customer journeys and detects pre-recorded videos, emulators, and fake webcams. There will also be times when AI needs to refer more sensitive or complex cases for human review, so the right investment will combine the power of AI with human expertise for a blended, comprehensive security experience. This way, customers are not falsely rejected, and any convincing deepfake attempt can be identified by a trained expert.
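The AI-plus-human-expertise model described above is often implemented as a simple triage: cases the model is confident about are decided automatically, while the ambiguous middle band is escalated to an analyst. A minimal sketch, with purely illustrative threshold values:

```python
# Illustrative human-in-the-loop triage: thresholds are assumptions, not
# values from any real fraud platform.

def triage(fraud_score, reject_above=0.9, approve_below=0.2):
    """Route an identity check by its model-assigned fraud score (0 to 1)."""
    if fraud_score >= reject_above:
        return "reject"        # confident fraud/deepfake signal
    if fraud_score <= approve_below:
        return "approve"       # confident legitimate user
    return "human_review"      # ambiguous: escalate to a trained expert

print(triage(0.95))  # reject
print(triage(0.05))  # approve
print(triage(0.50))  # human_review
```

The width of the review band is the friction dial: narrowing it lowers analyst workload but raises the risk of false rejections, which is exactly the trade-off the article describes.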

Avoiding the iceberg

There’s no doubt that deepfakes have transformed the nature of identity fraud in today’s digital landscape. Operating in the mainstream, deepfakes pose a significant threat to digital trust and integrity and have the potential to destabilize the relationship between customers and online businesses. Businesses must go on the offensive, training their teams to spot deepfake attempts and investing in sophisticated AI and biometric solutions that can keep them one step ahead. That’s how they’ll avoid the deepfake iceberg and set themselves up for sustainable long-term growth.


This article was produced as part of TechRadarPro's Expert Insights channel where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc. If you are interested in contributing find out more here: https://www.techradar.com/news/submit-your-story-to-techradar-pro
