ChatGPT is now being used to make scams much more dangerous


Scams on the internet might get a lot more dangerous now, thanks to fraudsters having unobstructed access to ChatGPT, the AI-powered chatbot that never seems to leave the headlines.

That's according to a report published earlier this month by cybersecurity firm Norton. In it, the company laid out three key ways threat actors could abuse ChatGPT to make internet scams more effective: deepfake content generation, phishing at scale, and faster malware creation.

Norton also argues that the ability to generate “high-quality misinformation or disinformation at scale” could assist bot farms in stoking division more efficiently, allowing threat actors to “sow mistrust and shape narratives in different languages” with ease.

Battling misinformation

Fraudsters looking to manage fake reviews could also have a field day with ChatGPT, Norton says, generating them en masse and in different tones of voice.

The already-famed chatbot could also be used in “harassment campaigns” on social media, to silence or bully people, Norton says, adding that the consequences could be “chilling”.

Hackers can also use ChatGPT in phishing campaigns, many of which are run by attackers who lack a native grasp of English; poor spelling and grammar often help victims spot an obvious scam attempt. With ChatGPT, threat actors could create highly convincing emails at scale.

Finally, writing malware might no longer be reserved for seasoned hackers. “With the right prompt, novice malware authors can describe what they want to do and get working code snippets,” the researchers said.

Consequently, we might witness an uptick in both the number and sophistication of malware strains, they say. What's more, with ChatGPT’s ability to “translate” source code into less common programming languages quickly and easily, more malware could slip past antivirus solutions.

As with any new tool before it, ChatGPT will most likely be used by scammers and hackers to advance their goals, too. It’s up to users, as well as the wider cybersecurity community, to find answers to these new threats, the researchers concluded.

Sead Fadilpašić

Sead is a seasoned freelance journalist based in Sarajevo, Bosnia and Herzegovina. He writes about IT (cloud, IoT, 5G, VPN) and cybersecurity (ransomware, data breaches, laws and regulations). In his career, spanning more than a decade, he’s written for numerous media outlets, including Al Jazeera Balkans. He’s also held several modules on content writing for Represent Communications.