AI has opened a new front in the war with cybercrime


Year after year, we see attackers, ranging from entry-level hackers to nation-state cyber armies, add new tactics, techniques and procedures (TTPs) to their cyberattack playbooks. One set of TTPs isn’t necessarily more sophisticated than another, but each presents a new strategy for businesses to worry about. And all of them have proven extremely effective at penetrating targets’ networks, stealing data and leaving victims a costly mess to clean up.


Just as AI is reshaping the landscape of legitimate technology, it’s also poised to reshape cybercrime technology. Indeed, in the last couple of years, we’ve seen a real change in the open-source landscape, where AI has become both democratized and commoditized. Anyone, whether their intent is benign or malicious, can now build and use AI models to generate convincing text, imagery, voice mimicry and Deepfake videos, and to make predictions about what content Internet users will respond to. These tools, and instructions on how to use them, are widely accessible. It doesn’t take a crystal ball to deduce that cyberattackers are going to capitalize on this to steal data and make money.
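To see just how commoditized this tooling has become, consider how little code it takes to generate fluent text with a freely available open-source model. The sketch below uses the Hugging Face transformers library; the model choice and prompt are illustrative assumptions, not a reference to any real tool or campaign.

```python
# Minimal sketch: off-the-shelf text generation with an open-source model.
# Assumes the Hugging Face `transformers` package is installed; the model
# name and prompt below are illustrative only.
from transformers import pipeline

# Download and load a small, freely available language model.
generator = pipeline("text-generation", model="gpt2")

# A short prompt is enough to produce fluent, human-sounding continuations.
prompt = "Hi team, quick reminder about the invoice we discussed:"
result = generator(prompt, max_length=60, num_return_sequences=1)

print(result[0]["generated_text"])
```

Comparable off-the-shelf projects exist for image synthesis and voice cloning, which is precisely the point: no specialist expertise is required to get started.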

Luckily, AI has not yet emerged as a major part of the cyberattack portfolio; today’s cybercriminals aren’t using it at scale to carry out their attacks. That’s the good news. The bad news is that they likely will very soon, and the impact of those AI-fueled attacks could be extremely damaging.

Here are a few potential ways in which AI is about to open a new front in the war with cybercrime.

AI’s potential applications in cybercrime

Legitimate software developers are using AI to solve a diverse set of problems: everything from recommending driving routes, music and products, to predicting who you’ll want to ‘friend’ on social media, which machines will fail on the factory floor and which crops will grow best on which land. Just as technologists continuously surprise us with new, positive applications of AI, we should expect that attackers will find diverse and unexpected ways of leveraging this technology. Still, a few use cases stand out as obvious and ominous.

First, it’s clear that Deepfakes will play a role in the social engineering attacks of the near future. Attackers, of course, have engaged in social engineering for decades. But while security teams, cybersecurity products and end users have become reasonably adept at recognizing standard phishing emails and malicious web pages, the threat becomes far less obvious when attackers use AI to create Deepfake videos or voicemails with mimicked voices that look and sound very much like your boss. Suddenly that request to wire money doesn’t seem so phony when you’re actually hearing their voice or seeing their face.

The Deepfake problem has only been exacerbated by the data trails that modern Internet users leave behind. It’s typically not hard to find video of corporate leaders on content sharing platforms, which attackers could use to seed a Deepfake generator and impersonate that person. It’s no wonder that in a report issued earlier this year, the FBI warned that synthetic content like Deepfakes and voice mimicry will be “increasingly used by foreign and criminal cyber actors for spear-phishing and social engineering” over the next 12-18 months, in what the bureau calls “an evolution of cyber operational tradecraft.”

The implications of spear-phishing

The spear-phishing implications don’t stop there. Attackers could just as well use artificial neural networks to spin up thousands of social media sock puppet accounts, with profile photos that look like real people but are actually computer-generated phonies. These sock puppets could be directed to spear-phish employees in diverse ways that are hard to detect and block en masse, or could be used to threaten a corporation’s reputation through swarms of defamatory tweets.

Beyond Deepfakes and sock puppets, attackers may begin to use AI wherever predictions are relevant to their malicious tactics. For example, it’s not hard to imagine attackers training machine learning systems to identify which organizations are most vulnerable to specific kinds of phishing attacks based on employees’ LinkedIn profiles, in the same way that Netflix predicts which users will respond positively to video content based on their behavioral profiles. While attackers’ use of machine learning to make predictions like these (and to optimize their attack workflows) will likely surprise us with its ingenuity, we can be nearly certain that they will adopt the same kinds of predictive modeling tools that legitimate digital businesses are already using, honed to their own purposes.
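For a sense of how low the bar is, here is a minimal sketch of that kind of commodity predictive modeling using scikit-learn. The features, data and labels are entirely invented for illustration; the point is that the same few lines a retailer might use to rank customers by likelihood to buy could just as easily rank targets by predicted susceptibility.

```python
# Minimal sketch of commodity predictive modeling with scikit-learn.
# All feature names, data and labels are invented purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical features per organization: public employee count, share of
# staff with detailed public profiles, and whether a breach was disclosed.
X_train = np.array([
    [120, 0.8, 1],
    [5000, 0.2, 0],
    [60, 0.9, 1],
    [900, 0.4, 0],
])
# Hypothetical labels: 1 = a past phishing simulation succeeded, 0 = it failed.
y_train = np.array([1, 0, 1, 0])

# Fit a simple classifier on the historical examples.
model = LogisticRegression().fit(X_train, y_train)

# Score new organizations and rank them by predicted susceptibility.
X_new = np.array([[200, 0.7, 0], [3000, 0.1, 1]])
scores = model.predict_proba(X_new)[:, 1]
print(scores)
```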

While the technology to make many of these dark visions feasible is, or is quickly becoming, accessible to attackers, the good news is it isn’t being used yet, at least not at scale. Additionally, cybersecurity experts have been keeping pace with bad actors in developing defensive applications for AI – something I’ll delve into in my next piece. But we should be clear-eyed about the problem we face, too. AI is about to become the next big front in cybercrime. It’s imperative that business and IT leaders start planning accordingly.


Joshua Saxe is VP and Chief Scientist at Sophos. He leads the data science team with a particular focus on inventing, evaluating and deploying deep learning detection models in support of next-gen endpoint security solutions.