ChatGPT's Python code writer has a major security hole that could let hackers steal your data

Screenshot of ChatGPT
(Image credit: Unsplash)

If you’re a programmer using ChatGPT to write or analyze Python code, be very careful about the URLs you paste into the generative AI tool, as there's a way for hackers to steal sensitive data from your projects this way. 

The theory was first reported by security researcher Johann Rehberger and later tested and confirmed by Avram Piltch at Tom’s Hardware.

ChatGPT can analyze, and then write, Python code if it’s given the right instructions. These instructions can be uploaded to the platform in a .TXT file, or even in a .CSV, if you’re looking for data analysis. The platform will store the files there (including any sensitive information like API keys and passwords - a common practice), in a newly generated virtual machine.


Reader Offer: $50 Amazon gift card with demo

Reader Offer: $50 Amazon gift card with demo
Perimeter 81's Malware Protection intercepts threats at the delivery stage to prevent known malware, polymorphic attacks, zero-day exploits, and more. Let your people use the web freely without risking data and network security.

Preferred partner (What does this mean?

Grabbing malicious instructions

Now, ChatGPT can do a similar thing with web pages. If a web page has certain instructions on it, when a user pastes the URL in the chatbox, the platform will run them. If the website’s instructions are to grab all of the contents from files stored in the VM and extract them to a third-party server, it will do just that. 

Piltch tested the idea, first uploading a TXT file with a fake API key and password, and then creating a legitimate website (a weather forecast site) which, in the background, instructed ChatGPT to take all the data, turn it into a long line of URL-encoded text, and send it to a server under his command. 

The trick is that a threat actor cannot instruct ChatGPT to grab just anyone’s data - the platform will only do it for the person who pasted the URL into the chatbox. That means the victim needs to be convinced to paste a malicious URL into their ChatGPT chatbox. Alternatively, someone could hijack a legitimate website and add malicious instructions.

More from TechRadar Pro

Sead is a seasoned freelance journalist based in Sarajevo, Bosnia and Herzegovina. He writes about IT (cloud, IoT, 5G, VPN) and cybersecurity (ransomware, data breaches, laws and regulations). In his career, spanning more than a decade, he’s written for numerous media outlets, including Al Jazeera Balkans. He’s also held several modules on content writing for Represent Communications.

TOPICS