In many ways, cybersecurity has always been a contest; vendors race to develop security products that can identify and mitigate threats, while cybercriminals aim to develop malware and exploits capable of bypassing those protections.
With the emergence of artificial intelligence (AI), however, this combative exchange between attackers and defenders is about to become more complex and increasingly ferocious.
According to Max Heinemeyer, Director of Threat Hunting at AI security firm Darktrace, it is only a matter of time before AI is co-opted by malicious actors to automate attacks and expedite the discovery of vulnerabilities.
“We don’t know precisely when offensive AI will begin to emerge, but it could already be happening behind closed doors,” he told TechRadar Pro.
“If we are able to [build complex AI products] here in our labs with a few researchers, imagine what nation states that invest heavily in cyberwar could be capable of.”
When this trend starts to play out, as seems inevitable, Heinemeyer says cybersecurity will become a “battle of the algorithms”, with AI pitted against AI.
The legacy approach
Traditionally, antivirus products have relied on a signature-based approach to shielding against malware. These services use a database of known threats to identify incoming attacks.
However, the consensus in recent years has been that signature-based services are ill-equipped to handle the pace of the modern threat landscape. As new threat types and attack vectors emerge, these legacy tools are powerless until updated with fresh threat intelligence, by which time it may already be too late.
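The weakness is easy to see in miniature. The sketch below (a hypothetical hash database, not any vendor's actual engine) shows how a signature lookup works, and why a sample that has never been catalogued sails straight through:

```python
import hashlib

def sha256(data: bytes) -> str:
    """Compute the hex digest used as a simple file signature."""
    return hashlib.sha256(data).hexdigest()

# Hypothetical signature database: hashes of previously analysed samples.
# A real product would consume a continuously updated threat-intelligence feed.
KNOWN_BAD_HASHES = {sha256(b"malicious payload v1")}

def is_known_threat(payload: bytes) -> bool:
    """Flag the payload only if its hash matches a recorded signature."""
    return sha256(payload) in KNOWN_BAD_HASHES

# An exact match is caught...
assert is_known_threat(b"malicious payload v1")
# ...but changing even one byte yields a new hash, so the variant slips
# through until the database catches up -- the core limitation of purely
# signature-based detection.
assert not is_known_threat(b"malicious payload v2")
```

A malware family that repacks or mutates itself on each infection effectively regenerates its "byte" on every hop, which is why legacy tools lag behind.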
This problem will only be aggravated by the emergence of offensive AI, which will allow cybercriminals to automate attacks in a way never before seen, as well as to identify potential exploits at a faster rate.
An example of a contemporary malware campaign capable of eluding signature-based security solutions is Emotet, a loader botnet that was recently taken down in a sting operation spanning law enforcement agencies in multiple countries.
“Emotet is really interesting because it was so resilient and its structure extremely modular. It used different levels of backups and command and control servers, some of which were even peer-to-peer,” Heinemeyer explained.
“Basically it was really hard to track because it was constantly evolving. Even if you managed to find and blacklist the malicious infrastructure, its signature would switch.”
The malware also spread extremely rapidly between devices. When it infected a machine, Emotet would harvest contact details stored locally for use in further email phishing attacks. It also operated on the network layer, attempting to brute force its way into other computers with weak password protection.
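Why weak passwords fall so quickly to this kind of network-layer spread can be illustrated with a toy dictionary attack. All names here are hypothetical, and the hashed-credential check stands in for whatever authentication the real malware probed:

```python
import hashlib
from typing import Optional

# Hypothetical stored credential: a weak password, hashed.
# (Stand-in for a real network service's authentication check.)
STORED_HASH = hashlib.sha256(b"letmein").hexdigest()

# A tiny list of common passwords; real attack dictionaries hold millions.
COMMON_PASSWORDS = ["123456", "password", "admin", "letmein", "qwerty"]

def dictionary_attack(stored_hash: str) -> Optional[str]:
    """Try each common password; return the first match, or None."""
    for guess in COMMON_PASSWORDS:
        if hashlib.sha256(guess.encode()).hexdigest() == stored_hash:
            return guess
    return None

# A machine 'protected' by a top-five password is compromised in
# a handful of guesses; a strong passphrase survives this list.
```

Any account whose password appears in such a list is effectively unprotected, which is how a single infected machine can seed an entire network.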
Emotet operators monetized their operation by selling access to compromised devices, which other threat actors might infect with secondary malware or ransomware. Other types of botnets are used to execute massive DDoS attacks, with the goal of disrupting the operations of major organizations.
The larger a botnet grows, the more powerful it becomes. And with the volume of connected devices in circulation expanding rapidly, the potential scope of future botnets is practically limitless.
“With an increased global digital landscape, we expect incidences of botnets to increase. Perhaps not botnets like Emotet, which go after non-IoT infrastructure, but [this trend] certainly opens the door for hackers to capitalize on the increased complexity,” said Heinemeyer.
The next frontier
To tackle fast-moving malware and increasingly complex threats, security firms such as Darktrace are using AI to automate detection and mitigation.
Where Darktrace differs from its rivals, however, is in its use of unsupervised machine learning which, unlike the supervised approach, does not require training the system on labelled datasets of known threats.
Instead, the platform plugs into an environment and makes various measurements to establish a definition of normal. Equipped with this information, the system is able to flag any abnormal activity on a device or network that might indicate a cyberattack or existing compromise.
And as the definition of normal changes within any given network, as happened when businesses were forced to transition to remote working in the spring of last year, the system reacts and recalibrates.
“The shift to remote working has been fascinating from an architectural perspective, because people are working differently and the threat landscape has changed,” said Heinemeyer.
“All of a sudden, there was just barebones server traffic in the office, but VPN activity went through the roof. But after the first week, we had a sense of what the new normal would look like.”
For now, says Heinemeyer, the cybersecurity industry has the upper hand, but this may not always be the case.
“We firmly believe we need fire to fight fire. Tools that look at yesterday’s attacks cannot even attempt to fight the automated attacks of tomorrow.”
“It might sound futuristic, but we will need AI to fight AI.”