ChatGPT hit by a zero-click, server-side vulnerability that criminals can use to siphon sensitive data - here's how to stay safe
Attackers can manipulate AI-driven workflows in unanticipated ways

- ChatGPT server-side flaw allows attackers to steal data without any user interaction
- ShadowLeak bypasses traditional endpoint security entirely
- Millions of business users could be exposed due to ShadowLeak exploits
Enterprises are increasingly using AI tools such as ChatGPT’s Deep Research agent to analyze emails, CRM data, and internal reports for strategic decision-making.
These platforms offer automation and efficiency but also introduce new security challenges, particularly when sensitive business information is involved.
Radware recently revealed a zero-click flaw in ChatGPT’s Deep Research agent, dubbed “ShadowLeak.” Unlike traditional vulnerabilities that leave traces on compromised endpoints, it lets attackers exfiltrate sensitive data covertly from the service itself.
ShadowLeak: a zero-click, server-side exploit
The flaw allows attackers to exfiltrate sensitive data directly from OpenAI’s servers, without requiring any interaction from users.
“This is the quintessential zero-click attack,” said David Aviv, chief technology officer at Radware.
“There is no user action required, no visible cue, and no way for victims to know their data has been compromised. Everything happens entirely behind the scenes through autonomous agent actions on OpenAI cloud servers.”
ShadowLeak also operates independently of endpoints or networks, making detection extremely difficult for enterprise security teams.
The researchers demonstrated that simply sending an email with hidden instructions could trigger the Deep Research agent to leak information autonomously.
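To make the attack vector concrete, the sketch below shows one defensive counterpart: scanning an email's HTML body for text styled to be invisible to a human reader but still parsed by an AI agent, which is the kind of hiding trick a ShadowLeak-style prompt injection relies on. The style heuristics and function names are illustrative assumptions, not taken from Radware's actual proof of concept.

```python
from html.parser import HTMLParser

# Styles commonly used to hide text from humans while leaving it machine-readable.
# Illustrative heuristics only; real payloads may use other techniques.
SUSPICIOUS_STYLES = ("display:none", "visibility:hidden", "font-size:0", "color:#ffffff")

class HiddenTextScanner(HTMLParser):
    """Collects text inside elements styled to be invisible (assumes well-formed HTML)."""

    def __init__(self):
        super().__init__()
        self._stack = []        # True for each open element that hides its content
        self._hidden_depth = 0  # number of enclosing hidden elements
        self.hidden_text = []   # text a human recipient would never see

    def handle_starttag(self, tag, attrs):
        style = (dict(attrs).get("style") or "").replace(" ", "").lower()
        hidden = any(s in style for s in SUSPICIOUS_STYLES)
        self._stack.append(hidden)
        if hidden:
            self._hidden_depth += 1

    def handle_endtag(self, tag):
        if self._stack and self._stack.pop():
            self._hidden_depth -= 1

    def handle_data(self, data):
        if self._hidden_depth and data.strip():
            self.hidden_text.append(data.strip())

def find_hidden_instructions(html: str) -> list:
    """Return any human-invisible text fragments found in an email body."""
    scanner = HiddenTextScanner()
    scanner.feed(html)
    return scanner.hidden_text
```

A filter like this could quarantine suspect messages before an autonomous agent ever reads them, though determined attackers can evade simple style checks, so it is a layer rather than a fix.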
Pascal Geenens, director of cyber threat intelligence at Radware, explained that “Enterprises adopting AI cannot rely on built-in safeguards alone to prevent abuse.
“AI-driven workflows can be manipulated in ways not yet anticipated, and these attack vectors often bypass the visibility and detection capabilities of traditional security solutions.”
The vulnerability represents the first known purely server-side, zero-click data exfiltration technique, leaving almost no evidence from the perspective of affected businesses.
With ChatGPT reporting over 5 million paying business users, the potential scale of exposure is substantial.
Human oversight and strict access controls remain critical when sensitive data is connected to autonomous AI agents.
Therefore, organizations adopting AI must approach these tools with caution, continuously evaluate security gaps, and combine technology with informed operational practices.
How to stay safe
- Implement layered cybersecurity defenses to protect against multiple types of attacks simultaneously.
- Regularly monitor AI-driven workflows to detect unusual activity or potential data leaks.
- Deploy the best antivirus solutions across systems to protect against traditional malware attacks.
- Maintain robust ransomware protection to safeguard sensitive information from lateral movement threats.
- Enforce strict access controls and user permissions for AI tools interacting with sensitive data.
- Ensure human oversight when autonomous AI agents access or process sensitive information.
- Implement logging and auditing of AI agent activity to identify anomalies early.
- Integrate additional AI tools for anomaly detection and automated security alerts.
- Educate employees on AI-related threats and the risks of autonomous agent workflows.
- Combine software defenses, operational best practices, and continuous vigilance to reduce exposure.
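The logging-and-auditing point above can be sketched as a minimal audit pass over an AI agent's action log that flags outbound requests to domains outside a known baseline. The log format, field names, and baseline domains here are assumptions for illustration; real agent platforms expose their own audit trails.

```python
import json

# Hypothetical baseline of domains the agent is expected to contact.
KNOWN_DOMAINS = {"api.openai.com", "mail.example-corp.com"}

def audit_agent_log(log_lines, known_domains=KNOWN_DOMAINS):
    """Return logged actions whose destination domain falls outside the baseline.

    Each log line is assumed to be a JSON object with at least an "action"
    field and, for network activity, a "domain" field.
    """
    anomalies = []
    for line in log_lines:
        event = json.loads(line)
        if event.get("action") == "http_request":
            domain = event.get("domain", "")
            if domain not in known_domains:
                anomalies.append(event)  # candidate exfiltration attempt
    return anomalies
```

Even a simple allowlist check like this would surface an agent suddenly posting data to an attacker-controlled endpoint, which is exactly the behavior a server-side exfiltration otherwise hides from endpoint tooling.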

Efosa has been writing about technology for over 7 years, initially driven by curiosity but now fueled by a strong passion for the field. He holds both a Master's and a PhD in sciences, which provided him with a solid foundation in analytical thinking. Efosa developed a keen interest in technology policy, specifically exploring the intersection of privacy, security, and politics. His research delves into how technological advancements influence regulatory frameworks and societal norms, particularly concerning data protection and cybersecurity. Upon joining TechRadar Pro, in addition to privacy and technology policy, he is also focused on B2B security products. Efosa can be contacted at this email: udinmwenefosa@gmail.com