ChatGPT is again being used to create malicious content

(Image: illustration of a chatbot inside a computer with a speech bubble. Credit: Getty)

It seems that the initial restrictions OpenAI placed on ChatGPT to prevent the tool from being used for malicious purposes didn’t do much, as crooks quickly found a way around them.

This is according to a new report from Check Point Research (CPR), which claims that even after the restrictions were imposed, crooks managed to use the AI writer to improve the code of a basic infostealer malware from 2019.

Revolutionizing the internet with conversation

ChatGPT is a chatbot built by OpenAI, which raised quite a few eyebrows for its conversational style and feats of creativity. Microsoft is already implementing it in its Edge web browser and search engine, Bing, promising a revolution in the way people use the internet.

There are two ways to use the tool: via the web user interface (simple access to ChatGPT, DALL·E 2, or the OpenAI Playground), or through an Application Programming Interface (API), which is used for building applications, processes, and the like. With the API, developers can build their own interface while the OpenAI models and data run in the background.
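To make the distinction concrete, here is a minimal sketch of the kind of request body an API client sends to OpenAI's servers. The endpoint and model name reflect the public GPT-3-era API and are illustrative assumptions, not details taken from the CPR report; a real client would POST this JSON with an API key in an Authorization header.

```python
import json

# Illustrative endpoint for OpenAI's GPT-3-era completions API (assumption,
# not confirmed by the report); external apps such as Telegram-bot
# integrations send requests like this directly, bypassing the web UI.
API_URL = "https://api.openai.com/v1/completions"

def build_completion_request(prompt: str, model: str = "text-davinci-003") -> str:
    """Return the JSON body an API client would POST to API_URL."""
    body = {
        "model": model,          # illustrative GPT-3-era model name
        "prompt": prompt,        # free-form text: no web-UI content filters apply here
        "max_tokens": 256,
        "temperature": 0.7,
    }
    return json.dumps(body)

payload = build_completion_request("Summarize what an API is in one sentence.")
print(payload)
```

The point researchers make is that nothing in this request path enforces the content restrictions the web interface applies; the prompt goes straight to the model.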


While OpenAI placed solid restrictions on web interface users (for example, you can no longer ask the tool to write a phishing email impersonating a bank or other financial institution), the researchers say restrictions are non-existent for the API approach.

“The current version of OpenAI’s API is used by external applications (for example, the integration of OpenAI’s GPT-3 model to Telegram channels) and has very few if any anti-abuse measures in place. As a result, it allows malicious content creation, such as phishing emails and malware code, without the limitations or barriers that ChatGPT has set on their user interface."

To make matters even worse, this is hardly just a theoretical concern raised by CPR. The researchers say there is “active chatter” on underground forums about the topic, meaning a growing number of cybercriminals are already aware of this workaround for ChatGPT’s restrictions.

Sead Fadilpašić

Sead is a seasoned freelance journalist based in Sarajevo, Bosnia and Herzegovina. He writes about IT (cloud, IoT, 5G, VPN) and cybersecurity (ransomware, data breaches, laws and regulations). In his career, spanning more than a decade, he’s written for numerous media outlets, including Al Jazeera Balkans. He’s also held several modules on content writing for Represent Communications.