AI and ChatGPT are scary, according to cybercriminals

Illustration of a bot inside a computer with a speech bubble (Image credit: Getty)

Many cybercriminals are skeptical about the use of AI-based tools such as ChatGPT to automate their malicious campaigns. 

A new Sophos investigation sought to gauge cybercriminals' interest in AI by analyzing dark web forums. It found that tools such as ChatGPT have numerous safeguards in place that prevent hackers from automating the creation of malicious landing pages, phishing emails, malware code, and more.

That has forced hackers to do one of two things: try to compromise premium ChatGPT accounts (which, the research suggests, come with fewer restrictions), or pivot to ChatGPT derivatives, cloned AI writers built to circumvent the safeguards.

Poor results and plenty of skepticism

But many are wary of the derivatives, fearing they might have been built simply to trick them.

“While there’s been significant concern about the abuse of AI and LLMs by cybercriminals since the release of ChatGPT, our research has found that, so far, threat actors are more skeptical than enthused,” says Ben Gelman, senior data scientist, Sophos. “Across two of the four forums on the dark web we examined, we only found 100 posts on AI. Compare that to cryptocurrency where we found 1,000 posts for the same period.”

While the researchers did observe attempts at creating malware or other attack tools using AI-powered chatbots, the results were “rudimentary and often met with skepticism from other users,” said Christopher Budd, director, X-Ops research, Sophos. 

“In one case, a threat actor, eager to showcase the potential of ChatGPT, inadvertently revealed significant information about his real identity. We even found numerous ‘thought pieces’ about the potential negative effects of AI on society and the ethical implications of its use. In other words, at least for now, it seems that cybercriminals are having the same debates about LLMs as the rest of us,” Budd added.

Sead is a seasoned freelance journalist based in Sarajevo, Bosnia and Herzegovina. He writes about IT (cloud, IoT, 5G, VPN) and cybersecurity (ransomware, data breaches, laws and regulations). In his career, spanning more than a decade, he's written for numerous media outlets, including Al Jazeera Balkans. He has also taught several modules on content writing for Represent Communications.