Cybercriminals are exploiting AI tools like ChatGPT to craft more convincing phishing attacks, alarming cybersecurity experts

Man using download manager on laptop
(Image credit: Unsplash)

If you’ve noticed a spike in suspicious-looking emails in the last year or so, it might be partly due to one of our favorite AI chatbots - ChatGPT. I know - plenty of us have had intimate, private conversations with ChatGPT and learned about ourselves along the way, and we don’t want to believe it would help scam us. 

According to cybersecurity firm SlashNext, ChatGPT and its AI cohorts are being used to pump out phishing emails at an accelerated rate. The report draws on the firm’s threat expertise and a survey of more than three hundred cybersecurity professionals in North America. It claims that malicious phishing emails have increased by 1,265% - and credential phishing specifically by 967% - since the fourth quarter of 2022. Credential phishing targets your personal information, such as usernames, IDs, passwords, or PINs, by impersonating a trusted person, group, or organization through email or a similar communication channel.

Malicious actors are using generative artificial intelligence tools, such as ChatGPT, to compose polished and specifically targeted phishing messages. Alongside phishing, business email compromise (BEC) messages are another common type of cybercriminal scam, aiming to defraud companies of funds. The report concludes that these AI-fueled threats are ramping up at breakneck speed, growing rapidly in both volume and sophistication. 

The report indicates that phishing attacks averaged 31,000 per day, and approximately half of the surveyed cybersecurity professionals reported receiving a BEC attack. As for phishing, 77% of these professionals reported being targeted by phishing attacks. 

small business security

(Image credit: Getty Images)

The experts weigh in

SlashNext’s CEO, Patrick Harr, relayed that these findings “solidify the concerns over the use of generative AI contributing to an exponential growth of phishing.” He elaborated that generative AI tech enables cybercriminals to turbocharge how quickly they pump out attacks, while also increasing their variety. They can produce thousands of socially engineered attacks with thousands of variations - and you only need to fall for one. 

Harr goes on to point the finger at ChatGPT, which saw momentous growth towards the end of last year. He posits that generative AI bots have made it a lot easier for novices to get into the phishing and scamming game, and have now become an extra tool in the arsenal of the more skilled and experienced - who can now scale up and target their attacks more easily. These tools can help generate more convincing and persuasively worded messages that scammers hope will reel people in.

Chris Steffen, a research director at Enterprise Management Associates, confirmed as much when speaking to CNBC, stating, “Gone are the days of the ‘Prince of Nigeria’”. He added that emails are now “extremely convincing and legitimate sounding.” Bad actors persuasively mimic and impersonate others in tone and style, or even send official-looking correspondence that appears to come from government agencies and financial services providers. They can do this better than before by using AI tools to analyze the writings and public information of individuals or organizations, tailoring their messages to make their emails and communications look like the real thing.

What’s more, there’s evidence that these strategies are already seeing returns for bad actors. Harr refers to the FBI’s Internet Crime Report, which alleges that BEC attacks have cost businesses around $2.7 billion, along with $52 million in losses from other kinds of phishing. The payoff is lucrative, and scammers are further motivated to multiply their phishing and BEC efforts. 

Person writing on computer.

(Image credit: Glenn Carstens-Peters / Unsplash)

What it will take to subvert the threats

Some experts and tech giants are pushing back, with Amazon, Google, Meta, and Microsoft having pledged to carry out testing to fight cybersecurity risks. Companies are also harnessing AI defensively, using it to improve their detection systems, filters, and the like. However, Harr reiterated that SlashNext’s research underscores that this is completely warranted, as cybercriminals are already using tools like ChatGPT to enact these attacks.

SlashNext found a particular BEC attack in July that used ChatGPT, accompanied by WormGPT. WormGPT is a cybercrime tool that’s publicized as “a black hat alternative to GPT models, designed specifically for malicious activities such as creating and launching BEC attacks,” according to Harr. Another malicious chatbot, FraudGPT, has also been reported to be circulating. Harr says FraudGPT has been advertised as an ‘exclusive’ tool tailored for fraudsters, hackers, spammers, and similar individuals, boasting an extensive list of features.

Part of SlashNext’s research has been into the development of AI “jailbreaks” - ingeniously designed prompts that, when entered, strip away an AI chatbot’s safety and legality guardrails. This is also a major area of investigation at many AI-related research institutions.

Workers at computers in an office

(Image credit: Unsplash / Israel Andrade)

How companies and users should proceed

If you’re feeling like this could pose a serious threat professionally or personally, you’re right - but it’s not all hopeless. Cybersecurity experts are stepping up and brainstorming ways to counter and respond to these attacks. One measure that many companies carry out is ongoing end-user education and training to see if employees and users are actually being caught out by these emails. 

The increased volume of suspicious and targeted emails means that a reminder here and there may no longer be enough; companies will now have to instill security awareness among users persistently. End users should not just be reminded but actively encouraged to report emails that look fraudulent and to discuss their security-related concerns. This doesn’t only apply to companies and company-wide security, but to us as individual users as well. If tech giants want us to trust their email services for our personal email needs, they’ll have to keep building out their defenses in these sorts of ways. 

As well as this culture-level change in businesses and firms, Steffen also reiterates the importance of email filtering tools that can incorporate AI capabilities and help prevent malicious messages from ever reaching users. It’s a perpetual battle that demands regular tests and audits: threats are always evolving, and as the abilities of AI software improve, so will the threats that utilize them. 

Companies have to improve their security systems, and no single solution can fully address all the dangers posed by AI-generated email attacks. Steffen puts forth that a zero-trust strategy can help fill the control gaps these attacks exploit and provide a defense for most organizations. Individual users, meanwhile, should be more alert to the possibility of being phished and tricked, because that possibility has gone up.

It can be easy to give in to pessimism about these types of issues, but we can be more wary of what we choose to click on. Take an extra moment, then another, and check all the details - you can even search the sender’s email address to see if anyone else has reported problems with it. It’s a tricky mirror world online, and it’s increasingly worthwhile to keep your wits about you.


Computing Writer

Kristina is a UK-based Computing Writer with an interest in all things computing, software, tech, mathematics, and science. Previously, she has written articles about popular culture, economics, and miscellaneous other topics.


She has a personal interest in the history of mathematics, science, and technology; in particular, she closely follows AI and philosophically motivated discussions.