Generative AI's impact on phishing attacks


Generative AI, specifically ChatGPT, hit the global technology scene like a tidal wave at the end of 2022, and that wave has only grown bigger throughout 2023.

While not a new technology, generative AI has taken center stage as the next innovation set to revolutionize how we live and work. Alongside the enthusiasm, however, has come a backlash emphasizing its perceived perils, chief among them the use of AI for nefarious or malicious purposes, particularly cybercrime.

Because it is so widely accessible, generative AI has made threat actors' email phishing campaigns significantly more effective.

Earlier this year, Darktrace published research showing that while the number of email phishing attacks across our customer base has held steady since ChatGPT's release, those that rely on tricking victims into clicking malicious links have declined. At the same time, the linguistic complexity of phishing emails, including text volume, punctuation, and sentence length, has increased. We also found a 135% increase in 'novel social engineering attacks' across thousands of active Darktrace/Email customers from January to February 2023, a period that coincided with the widespread adoption of ChatGPT.
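Darktrace has not published the exact metrics behind these linguistic-complexity measurements, but as a rough sketch, features such as text volume, punctuation density, and average sentence length might be computed with simple heuristics like the following (illustrative only, not Darktrace's methodology):

```python
import re

def linguistic_features(body: str) -> dict:
    """Rough linguistic-complexity features of an email body.

    Illustrative heuristics only; not Darktrace's actual metrics.
    """
    # Split on sentence-ending punctuation; drop empty fragments
    sentences = [s for s in re.split(r"[.!?]+\s*", body) if s.strip()]
    words = body.split()
    return {
        "text_volume": len(words),  # overall length in words
        "punctuation_count": sum(ch in ",.;:!?-()'\"" for ch in body),
        "avg_sentence_length": len(words) / max(len(sentences), 1),
    }

print(linguistic_features(
    "Hi team. Please review the attached invoice; payment is due Friday, "
    "and I would appreciate your prompt confirmation."
))
```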

This trend raises concerns that generative AI tools such as ChatGPT could be giving threat actors an avenue to craft sophisticated, targeted attacks at speed and scale: an email that looks as though it came from your boss, with the correct spelling, grammar, punctuation, and tone.


How the generative AI email landscape is evolving

More recently, between May and July of this year, Darktrace has seen a shift in attacks that abuse trust. The malicious email no longer looks like it came from your boss; it looks like it came from the IT team. Our researchers discovered that while VIP impersonation, meaning phishing emails that mimic senior executives, decreased by 11%, email account takeover attempts rose by 52% and impersonation of the internal IT team increased by 19%.

The changes are typical of attacker behavior: switching up tactics to evade defenses. The findings suggest that as employees have become better attuned to the impersonation of senior executives, attackers are pivoting to impersonating IT teams instead. With generative AI at their fingertips, attackers may make the problem worse, using tools that increase linguistic sophistication and pairing them with highly realistic voice deepfakes to trick employees with greater success.

With email compromise remaining the primary source of business vulnerability, generative AI has added a new layer of complexity to cyber defense. As generative AI becomes more mainstream – across images, audio, video, and text – we can only expect to see trust in digital communications continue to erode.

It's not all doom and gloom: AI can also be harnessed for good

While the news agenda is dominated by the negative aspects of AI in cybersecurity, it is important to remember that no AI is inherently bad; it is how humans apply it that creates bad outcomes, as when cyber attackers abuse it. Crucially, humans, and specifically cybersecurity teams, can augment themselves with AI for good, helping to fight off cyber-attacks, whether AI-powered or not.

Defensive AI that knows the business and understands employee behavior, AI that self-learns and analyzes normal communication patterns such as an employee's tonality and sentence length, can determine for each email whether it is suspicious or legitimate. By recognizing these subtle nuances, it will always be stronger than attackers' AI trained solely on globally available data. Put simply, the way for defenders to stop hyper-personalized, AI-powered email attacks is to have an AI that knows more about your business than any external, generative AI ever could.
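As a minimal sketch of that idea, and not a description of any vendor's implementation, imagine maintaining a per-sender baseline of one communication feature (say, average sentence length) and flagging emails that deviate sharply from it; the class, thresholds, and sample values below are all hypothetical:

```python
from statistics import mean, stdev

class SenderBaseline:
    """Learns a per-sender baseline for one numeric feature
    (e.g. average sentence length) and flags outliers.

    A toy sketch of self-learning email analysis, not a real product.
    """
    def __init__(self):
        self.history: list[float] = []

    def observe(self, value: float) -> None:
        self.history.append(value)

    def is_suspicious(self, value: float, z_threshold: float = 3.0) -> bool:
        if len(self.history) < 5:  # not enough data to judge yet
            return False
        mu, sigma = mean(self.history), stdev(self.history)
        if sigma == 0:
            return value != mu
        # Flag values far outside the sender's learned norm
        return abs(value - mu) / sigma > z_threshold

# Example: a sender who usually writes short sentences
baseline = SenderBaseline()
for length in [8.2, 7.9, 9.1, 8.5, 8.8, 7.6]:
    baseline.observe(length)

print(baseline.is_suspicious(8.4))   # False: in line with past behavior
print(baseline.is_suspicious(24.0))  # True: a sharp deviation worth review
```

A real system would combine many such signals across senders, devices, and topics, but the principle is the same: the baseline is learned from your business, not from globally available data.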

At the end of the day, the cybersecurity battle is still human versus human: behind every attack there is a threat actor moving behind a screen, and the AI itself is blameless. As security teams, we need to look to AI to help solve our security woes, not just treat it as a threat vector. If we can collectively achieve this, we stand a fighting chance against AI-powered threats.


Max Heinemeyer, Chief Product Officer, Darktrace.