How GenAI could give threat actors a disarming advantage


Humans are fundamentally social creatures, and language lies at the heart of how we socialize and communicate. It is the basis of understanding and therefore coexistence. Whether we know it or not, most of us speak two "languages": the language of officialdom and business, and the dialect spoken in the region where we grew up. Hearing or reading the latter can disarm us, making us feel closer to the person writing or speaking it.

The challenge with generative AI (GenAI) is that it gives threat actors with little grasp of such linguistic subtleties the ability to get inside our heads. It could further bolster their efforts to socially engineer victims, and conduct convincing fraud and disinformation campaigns.


The language of cybercrime

Reading the dialect of our birthplace or childhood can have a strange psychological effect on many of us: it creates a sense of empathy with the person writing it. Even when we know the text was artificially generated by GenAI, it can have a similar impact.

However, there are unfortunately also opportunities here for threat actors. Take phishing. It still ranks as one of the top threat vectors for cyber-attacks, representing nearly a quarter of all ransomware compromises in Q4 2023. Fundamentally, it relies on social engineering: the ability of the fraudster to manipulate their victim into doing their bidding. They might do so by using official logos and sender domains. But language also plays a key role.

This is where GenAI could give opportunistic threat actors a leg-up. Writing phishing missives in a dialect the recipient instantly understands could raise trust levels and trick the victim into believing what they are being told. Now, this is unlikely to work in an enterprise setting. But it could be used in scams targeting consumers. GenAI is already predicted to supercharge phishing by generating grammatically perfect content in multiple languages. Why not multiple dialects too?

The same logic could see scammers use GenAI to gain the trust of their victims in romance and other confidence fraud types. The use of dialects could play a critical role in overcoming our increasingly skeptical attitude to people we meet online. It’s a cybercrime that already cost victims $734m in 2022, according to the FBI. But the bad guys are always looking for innovative ways to increase their haul.

Building bombs and faking news

Another threat looms large this year: misinformation/disinformation. Together, they were recently ranked by the World Economic Forum (WEF) as the number one global risk of the next two years. With around a quarter of the world’s population heading to the polls in 2024, there are growing concerns that nefarious actors will try to swing results towards their favored candidates, or undermine confidence in the entire democratic process. And while more seasoned internet users are becoming increasingly dubious about the news they read online, dialect could once again be a trump card for threat actors.

First, dialect is not widely used in written form, which means we may pay more attention to content written in one. We might read a social media post written in dialect simply for the pleasure of deciphering what it means. And if it's our own dialect, we might feel instantly closer to the person, or machine, that posted it. Politicians and cybersecurity experts may warn us about election interference from foreigners. But what could be less "foreign" than an account posting in a local or regional dialect close to home?

Finally, consider how dialects may allow threat actors to "jailbreak" GenAI systems. Researchers at Brown University in the US used rarely spoken languages such as Gaelic to do exactly this to ChatGPT. The OpenAI chatbot has specific safety guardrails designed into it, such as refusing to give a user instructions on how to build a bomb. However, when the researchers made unethical requests to ChatGPT in rare languages, they were able to access the forbidden information. According to media reports, OpenAI is aware of the risk and is already taking steps to mitigate it. But we must remember that although GenAI seems "intelligent", it can sometimes have the naivety of a four-year-old.

Time to educate

So what's the solution? Certainly, AI developers must build better protections against abuse of GenAI's dialect-generating capabilities. But users also need to improve their understanding of potential threats, and ramp up their skepticism of what they read and watch online. Companies should include dialect in their anti-phishing and anti-fraud training programs. And governments and industry bodies may want to run wider public awareness campaigns. As GenAI is increasingly used for malicious purposes, imperfect language skills may, in time, even become a sign of credibility in written communication: a hint that a human, not a machine, is behind the words.

That isn’t where we are right now. But as cybersecurity professionals, we have to acknowledge that it could be soon.


This article was produced as part of TechRadarPro's Expert Insights channel, where we feature the best and brightest minds in the technology industry today. The views expressed here are those of the author and are not necessarily those of TechRadarPro or Future plc.

Richard Werner is European Business Consultant at Trend Micro.