AI’s role in the future of cybersecurity


AI is revolutionizing cybersecurity. From automatically detecting network irregularities to deciding how best to allocate security defenses, some of the most data-intensive tasks are rapidly being taken over by machines that can process information at far greater speed and scale than people.


Cybercriminals, however, know this. While AI has not been a major tool for attackers thus far, it has potential. Even now, early examples of attackers using new, easily accessible open-source AI technology to create fake photos, videos and speech for phishing campaigns suggest a future in which AI is widely used by criminals and nation-state cyber actors.

But AI can be used for good too. Just as attackers will fold it into their attack methods, security researchers have spent years creating defensive applications for AI. This isn’t a “fight fire with fire” approach, though: AI-backed security doesn’t necessarily thwart AI-backed attacks, or vice versa. What AI does bring to the table is a broad boost in the efficacy of cybersecurity products and services, helping organizations deflect, isolate or prevent attacks across an increasingly complex threat landscape.

AI for pattern matching and threat detection

Until the past half decade or so, most cyber threat detection was performed using small, hand-written pattern-matching programs called “signatures”. The widespread adoption of AI has changed this. Security vendors are now on a long march to augment signature-based detection with AI in every detection context: phishing emails, malicious mobile apps, malicious command executions and the like. This should come as no surprise; after all, AI-enabled analytics has even helped discern the jargon and code words hackers develop to refer to their new tools, techniques and procedures. It was AI, for example, that spotted hackers using the term ‘mirai’ to mean ‘botnet’.
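
To make the contrast concrete, here is a minimal, hypothetical sketch in Python: a hand-written signature that catches one known phishing lure, alongside a small text classifier trained with scikit-learn. Every rule, feature and sample message here is an illustrative assumption, not anything drawn from a real detection product.

```python
import re

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# 1. A "signature": a hand-written pattern that catches one known phishing lure.
SIGNATURE = re.compile(r"verify your account within 24 hours", re.IGNORECASE)

def signature_detect(email_text: str) -> bool:
    """Return True if the email matches the hand-written signature."""
    return bool(SIGNATURE.search(email_text))

# 2. An ML detector: learns a statistical model from labelled examples, so it
#    can also assign a score to messages that match no known signature.
train_texts = [
    "Verify your account within 24 hours or it will be suspended",  # phishing
    "Your parcel is on hold, pay the customs fee at this link",     # phishing
    "Minutes from Tuesday's project meeting are attached",          # benign
    "Lunch on Friday to celebrate the release?",                    # benign
]
train_labels = [1, 1, 0, 0]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(train_texts, train_labels)

new_email = "Unusual sign-in detected, please confirm your password here"
print("signature hit:", signature_detect(new_email))            # False: wording the signature has never seen
print("model score:", model.predict_proba([new_email])[0][1])   # probabilistic phishing score instead of a yes/no rule
```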

This doesn’t mean, however, that AI will replace signatures. Replacing signature techniques with AI outright can increase detection rates, but it also increases false positives. To avoid this, the two technologies should be used together as complements. Whereas signatures are good at detecting known threats, AI algorithms are better at detecting previously unseen threats, because they generalize from the patterns in the security data they were trained on. Whereas signatures can be written and deployed quickly, AI models take far longer to train and deploy. And while signature authors can control precisely what threats their signatures will and won’t detect, AI is fundamentally probabilistic and harder to control.
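
One simple way to combine the two, under the same illustrative assumptions, is sketched below: signatures deliver precise verdicts on known threats, and the probabilistic model only blocks when its score clears a conservative bar. The sketch reuses signature_detect() and model from the previous example, and the 0.9 threshold is a placeholder, not a recommended value.

```python
def combined_verdict(email_text: str, threshold: float = 0.9) -> str:
    """Combine the two detectors: signatures first, then the model."""
    # Signatures: precise, author-controlled detection of known threats.
    if signature_detect(email_text):
        return "block (known threat: signature match)"
    # Model: probabilistic detection of previously unseen threats, gated
    # behind a conservative threshold to keep false positives down.
    score = model.predict_proba([email_text])[0][1]
    if score >= threshold:
        return f"block (suspected new threat, score={score:.2f})"
    return "allow"

print(combined_verdict("Verify your account within 24 hours"))
print(combined_verdict("Lunch on Friday to celebrate the release?"))
```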

The good news about this trend of combining AI with signatures is that it’s making a significant difference in our ability to detect cyberattacks, particularly ransomware, which was responsible for some of the biggest cyber incidents of the past year, including the attacks on Colonial Pipeline, Kaseya and Kronos.

The future of AI in cybersecurity

Unfortunately, there hasn’t been much exploration beyond the narrow use case of applying AI to detect attacks before they happen. Yet AI has far broader potential, from optimizing and monitoring data centers to reducing the cost of hardware maintenance and improving network security, and security experts should keep pace with the latest developments. In the future, it will be necessary to explore new application areas for AI that can strengthen our lines of defense.

This is challenging, because it requires that cybersecurity leaders keep track of the rapidly evolving AI R&D space just as we track trends in cybersecurity practice and cybersecurity threats. But it’s too important a priority to forsake.

Some areas the defensive cybersecurity community urgently needs to focus on include:

  • AI models that can accurately predict which security cases analysts truly care about, and then intuitively cue up the relevant information for security operators (a minimal sketch of such a triage model follows this list).
  • Natural language and visualization user interfaces, not unlike the way a Google search for COVID-19 case numbers returns a neatly visualized case-tracker graph. These interfaces would surface and visualize relevant information during “live fire” cybersecurity incidents.
  • Natural language AI chatbots capable of understanding and answering open-ended questions as they arise in security incident response and investigation workflows.
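
To illustrate the first item, here is a minimal sketch of a triage model that learns from historical analyst decisions which alerts were worth escalating, then ranks the incoming queue accordingly. The features, data and choice of scikit-learn model are illustrative assumptions, not a description of any shipping system.

```python
from sklearn.ensemble import GradientBoostingClassifier

# Hypothetical per-alert features:
#   [severity 0-10, asset criticality 0-10, related alerts in last 24h, detector confidence 0-1]
history_features = [
    [9, 8, 4, 0.95],
    [2, 1, 0, 0.40],
    [7, 9, 2, 0.80],
    [3, 2, 1, 0.55],
]
analyst_escalated = [1, 0, 1, 0]  # which historical alerts analysts actually acted on

triage_model = GradientBoostingClassifier().fit(history_features, analyst_escalated)

# Rank today's queue by the predicted probability that an analyst would escalate each alert.
incoming_alerts = [
    {"id": "ALERT-101", "features": [8, 7, 3, 0.90]},
    {"id": "ALERT-102", "features": [1, 2, 0, 0.35]},
    {"id": "ALERT-103", "features": [5, 9, 1, 0.70]},
]
for alert in sorted(
    incoming_alerts,
    key=lambda a: triage_model.predict_proba([a["features"]])[0][1],
    reverse=True,
):
    score = triage_model.predict_proba([alert["features"]])[0][1]
    print(f"{alert['id']}: escalate-probability {score:.2f}")
```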

Artificial intelligence and machine learning are a double-edged sword. While they can improve security, they can also make it easier for cybercriminals to penetrate systems without human intervention. We can count on cyber adversaries to get creative and act boldly in applying AI to their malware, but AI should not be a tool for attackers alone. We need to keep incrementally improving the AI we’re already using to detect cyberattacks. And given the rapidly evolving, complex threat landscape we face, CIOs, CTOs, and IT and SecOps teams have to commit to exploring new and creative ways of applying AI technology, ways that focus on helping the human operators on whom our network security ultimately depends.


Joshua Saxe is VP and Chief Scientist at Sophos. He leads the data science team, with a particular focus on inventing, evaluating and deploying deep learning detection models in support of next-gen endpoint security solutions.