How to protect your business in the age of deepfakes
Facing facts and voicing concerns
Deepfake technology isn’t a new concept. In 1991, Terminator 2 became the first film to create an entirely computer-generated character with realistic human movements (and incidentally, one of cinema’s most iconic villains). And ever since, media like movies and video games have constructed thousands of realistic, sympathetic, and engaging characters out of pixels alone.
This process used to be incredibly time-consuming and costly, the preserve of experts. But in recent years, the technology required to create photorealistic depictions of people has spread into the hands of everyday users. In 2021, a TikTok account named @deeptomcruise began posting humorous deepfake videos of Hollywood star Tom Cruise. It’s since amassed over 5.1 million followers and has even evolved into an industry-leading generative AI business. Meanwhile, as generative AI tools became ever more accessible, 2022 and 2023 saw deepfake videos emerge more widely and spread like wildfire across social media.
With routine AI-generated imagery now fooling even the most skeptical internet users, advancements in 2024 are set to impact us to an even greater extent. Sadly, like many initially innocent technologies, deepfakes are now being exploited for nefarious ends, with the latest high-profile victim being Taylor Swift. And in a year of key world events like the UK general election and the Paris Olympics, these deepfakes may enter the mainstream for both consumers and businesses. So, how can we protect ourselves against their negative effects?
The realism and risks of modern deepfakes
Today, nine in ten (90%) cybersecurity breaches are identity-related. Yet more than four in ten companies (44%) are still in the early stages of their identity security journey. Identity, and its role in security in particular, must become a business priority.
Identity is a core element of cybersecurity. And in business terms, identity is all about ‘who’ has access to ‘what’ information. In the past, the ‘who’ was generally a person or group of people, and the ‘what’ a database or application. Today, the ‘whos’ have proliferated beyond internal employees to contractors, supply chain members, and perhaps even artificial intelligence. The ‘whats’ have expanded too, as more data moves through more systems—from emails to apps to the cloud and much more. The more users and entry points there are, the tougher it is to screen all identities and keep all data secure against growing threats. Even security measures that were previously thought to be advanced and watertight, such as voice recognition, are no longer a match for today’s AI-fueled risks.
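To make that “who has access to what” model concrete, here is a minimal, hypothetical sketch in Python. The identities, resources, and grant table are invented for illustration and don’t reflect any particular product’s API; the point is simply that access is denied unless it has been explicitly granted.

```python
# A minimal, hypothetical sketch of the "who has access to what" model.
# Identities (human or machine) are mapped to the specific resources
# they may touch -- and nothing more.

GRANTS = {
    "alice@example.com":  {"payroll-db"},       # internal employee
    "bob@contractor.com": {"build-pipeline"},   # external contractor
    "invoice-bot":        {"invoices-api"},     # non-human identity
}

def is_authorized(identity: str, resource: str) -> bool:
    """Deny by default: access exists only if it was explicitly granted."""
    return resource in GRANTS.get(identity, set())

# The contractor can reach the build pipeline, but not payroll data.
assert is_authorized("bob@contractor.com", "build-pipeline")
assert not is_authorized("bob@contractor.com", "payroll-db")
```

As the paragraph above notes, the hard part isn’t the check itself but keeping the grant table accurate as the number of identities and entry points grows.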
At SailPoint, we explored the threats of identity theft in a recent experiment. We used an AI tool to listen to recordings of our CEO Mark McClain’s voice and then create its own version. Then, both the tool and Mark read a script in a blind test in front of SailPoint employees. Even though they knew it was an experiment, one in three employees got it wrong: the AI-generated voice was convincing enough that they believed it was Mark.
It’s no wonder, then, that these types of impersonation scams are gaining traction across the UK. Just last summer, trusted consumer finance expert Martin Lewis fell victim to a deepfake video scam in which his computer-generated twin encouraged viewers to back a bogus investment project. Lewis described it as “frightening”, and you’d be hard-pressed not to agree. As the technology advances, cybercriminals are increasingly going to be able to breach people’s trust and jump existing security hurdles with ease.
The wider, real-world impact of generative AI
Many experts are also concerned about deepfakes’ impact on public political opinions. Over the past fifteen years or so, we’ve seen how the internet and social media can sway real-world events—from the Obama team’s pioneering use of Facebook ahead of the 2008 US presidential election to the 2018 Cambridge Analytica personal data scandal. Plus, there are the everyday algorithms that control our exposure to ideas and information and therefore unconsciously shape our views. But as AI technology advances, the internet may have an increasingly overt effect on politics simply through the distribution of deepfake videos of politicians that are becoming tougher and tougher to distinguish from reality.
A 2023 Guardian article lists some notable examples of deepfake imagery and videos of political figures making striking, shocking, or bizarre statements, some of which may have tricked viewers into believing they were real. It argues that the technology risks dangerously disrupting how the public, and particularly those unsuspecting or unaware of its capabilities, views and trusts our world leaders. With both the US and UK elections set to take place in 2024, plus world events that have a huge economic and civic impact like the Paris Olympics also on the horizon, we must remain increasingly wary of the impact of these deepfakes into the new year and beyond.
Leveraging cutting-edge security features
In 2023, we saw cyber criminals ramp up their use of AI deepfake technology across a range of attack vectors. So, in 2024, the onus on potential victims to identify the real content in a sea of fakes will become even heavier. To combat this escalation, businesses will need to step up employee training on how to spot deepfakes, and they should also review and reinforce digital access rights, so employees, partners, contractors, and so on, only receive as much access to important data as their roles and responsibilities require. Data minimization—collecting only what is necessary and sufficient—will be essential as well.
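As a rough illustration of what least-privilege access rights and data minimization can look like in practice, here is a small, hypothetical Python sketch. The roles and record fields are invented for the example; a real deployment would drive this from an identity governance platform rather than a hardcoded table.

```python
# A minimal, hypothetical sketch of data minimization: each role sees
# only the fields its responsibilities require, and nothing else.

FIELDS_BY_ROLE = {
    "payroll":  {"name", "salary", "bank_account"},
    "helpdesk": {"name", "email"},
}

def minimized_view(record: dict, role: str) -> dict:
    """Return only the fields this role is allowed to read."""
    allowed = FIELDS_BY_ROLE.get(role, set())
    return {k: v for k, v in record.items() if k in allowed}

employee = {"name": "Ada", "email": "ada@example.com",
            "salary": 90_000, "bank_account": "GB00...0000"}

print(minimized_view(employee, "helpdesk"))
# {'name': 'Ada', 'email': 'ada@example.com'} -- no salary, no bank details
```

The design choice is the same as above: data that was never handed over cannot be leaked, so a compromised helpdesk identity exposes far less than a compromised payroll one.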
Moving forward, it’s key that businesses use stronger forms of digital identity security. For instance, verifiable credentials—cryptographically signed proofs that someone is who they say they are—could be used to “prove” someone’s identity rather than relying on sight and sound. In the event of a deepfake scam, proof could then be provided to ensure that the CEO or colleague is actually who they claim to be. Some emerging security tools now even leverage AI to defend against deepfakes, with the technology able to learn, spot, and proactively highlight the signs of fake video and audio to successfully thwart potential breaches. Overall, we’ve seen that businesses using AI and machine learning tools, along with SaaS and automation, scale as much as 30% faster and get more value for their security investment through increased capabilities.
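The core mechanism behind verifiable credentials can be sketched in a few lines. The example below uses an Ed25519 signature from the widely available Python cryptography package to stand in for a full standards-based implementation (such as the W3C Verifiable Credentials data model); the issuer, subject, and claim shown are illustrative only.

```python
# A minimal sketch of the idea behind verifiable credentials: a claim is
# cryptographically signed by an issuer, so a verifier checks the signature
# instead of trusting sight or sound. (Illustrative only; real deployments
# follow standards such as W3C Verifiable Credentials.)
# Requires: pip install cryptography
import json
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

issuer_key = Ed25519PrivateKey.generate()            # e.g. the company's identity provider
claim = json.dumps({"subject": "mark.mcclain", "role": "CEO"}).encode()
signature = issuer_key.sign(claim)                   # the signed "credential"

def verify(public_key, claim: bytes, signature: bytes) -> bool:
    """A deepfaked voice or face cannot produce a valid signature."""
    try:
        public_key.verify(signature, claim)
        return True
    except InvalidSignature:
        return False

print(verify(issuer_key.public_key(), claim, signature))             # True
print(verify(issuer_key.public_key(), b"forged claim", signature))   # False
```

The point of the design is that a convincing face or voice buys an attacker nothing: without the issuer’s private key, no valid signature can be produced, however realistic the deepfake.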
Flying into the many faces of danger
Ultimately, stopping just one cybersecurity breach can save millions in lost revenue, regulatory fines, and reputational damage. Yet more than nine in ten (91%) IT professionals say that budgetary constraints are an obstacle to identity management security. However, with deepfakes forming a large part of today’s threat landscape, it’s not the time to try to save a few pounds. Businesses’ IT security teams must be given the tools they need to defend against these types of attacks.
Fortunately, as AI technology grows in accessibility, so too do security tools. Identity platforms that leverage automation and AI enable companies to scale identity-related capabilities up to 37% faster than those without. As we move into 2024, investment in tools like these must be the only matter we take at face value.
Mike Kiser, Director of Strategy & Standards, SailPoint.