Whisper it - Microsoft uncovers sneaky new attack that infers what users ask top LLMs from their encrypted traffic


  • Microsoft's Whisper Leak research reveals privacy flaws in encrypted AI traffic
  • Encrypted AI chats may still leak clues about what users discuss
  • Attackers can track conversation topics using packet size and timing

Microsoft has revealed a new type of cyberattack it has called "Whisper Leak", which is able to expose the topics users discuss with AI chatbots, even when conversations are fully encrypted.

The company’s research suggests attackers can study the size and timing of encrypted packets exchanged between a user and a large language model to infer what is being discussed.
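To make that vantage point concrete, here is a minimal sketch (our illustration, not Microsoft's tooling) of what a passive observer could record with an off-the-shelf sniffer such as scapy - only packet sizes and timestamps, never plaintext. Capturing traffic this way typically requires administrator privileges, and the port filter is a placeholder:

```python
# Illustrative only: a passive observer's view of encrypted chatbot traffic.
# Nothing is decrypted -- the sniffer sees just sizes and arrival times.
from scapy.all import sniff

trace = []  # (timestamp, payload_bytes) per packet

def record(pkt):
    # TLS payloads are opaque, but their length and timing are not
    if pkt.haslayer("TCP") and len(pkt["TCP"].payload) > 0:
        trace.append((float(pkt.time), len(pkt["TCP"].payload)))

# Capture HTTPS traffic for 30 seconds (port 443 = TLS)
sniff(filter="tcp port 443", prn=record, timeout=30)
print(trace[:10])  # the kind of fingerprint Whisper Leak-style attacks classify
```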

"If a government agency or internet service provider were monitoring traffic to a popular AI chatbot, they could reliably identify users asking questions about specific sensitive topics," Microsoft said.

Whisper Leak attacks

This means "encrypted" doesn’t necessarily mean invisible - the vulnerability lies in how LLMs send responses.

These models do not wait for a complete reply, but transmit data incrementally, creating small patterns that attackers can analyze.

Over time, as they collect more samples, these patterns become clearer, allowing more accurate guesses about the nature of conversations.

This technique doesn’t decrypt messages directly but exposes enough metadata to make educated inferences, which is arguably just as concerning.
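To illustrate why metadata alone is enough, here is a self-contained sketch - synthetic traces and simple summary features, not Microsoft's actual pipeline, which trained more sophisticated models on real captures - of a classifier that flags a target topic from packet sizes and timings:

```python
# Hedged, illustrative sketch: classify "sensitive topic vs. other" from
# packet-size/timing traces alone. All data below is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def featurize(sizes, gaps):
    # Summary statistics of a trace -- the only signal the attacker has
    return [np.mean(sizes), np.std(sizes), np.mean(gaps), np.std(gaps), len(sizes)]

X, y = [], []
for label in (0, 1):  # pretend label 1 is the target topic
    for _ in range(500):
        n = rng.integers(20, 120)                   # number of streamed chunks
        sizes = rng.normal(90 + 15 * label, 20, n)  # encrypted chunk sizes (bytes)
        gaps = rng.exponential(0.05, n)             # inter-arrival times (s)
        X.append(featurize(sizes, gaps))
        y.append(label)

X_tr, X_te, y_tr, y_te = train_test_split(np.array(X), np.array(y), random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)
print(f"topic-inference accuracy: {clf.score(X_te, y_te):.2f}")
```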

Following Microsoft’s disclosure, OpenAI, Mistral, and xAI all said they moved quickly to deploy mitigations.

One solution adds a "random sequence of text of variable length" to each response, disrupting the consistency of token sizes that attackers rely on.
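The vendors haven’t published their exact implementations, so the following sketch is purely illustrative (the padding field name and scheme are our own invention): each streamed chunk carries a random-length junk string that the client discards, decoupling ciphertext size from token length.

```python
# Illustrative padding scheme (not any vendor's actual API): random-length
# junk appended to each chunk so equal tokens produce unequal ciphertexts.
import json, secrets, string

def pad_chunk(chunk: str, max_pad: int = 64) -> dict:
    pad_len = secrets.randbelow(max_pad + 1)  # unpredictable padding length
    junk = "".join(secrets.choice(string.ascii_letters) for _ in range(pad_len))
    return {"text": chunk, "p": junk}  # receiving client simply ignores "p"

# Two identical tokens now yield different wire sizes on almost every send
print(len(json.dumps(pad_chunk("Hello"))), len(json.dumps(pad_chunk("Hello"))))
```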

In the meantime, Microsoft advises users to avoid sensitive discussions over public Wi-Fi, use a VPN, or stick with non-streaming LLM models.
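On that last point, most chat APIs let clients request a single, non-streamed reply. For example, with the official openai Python SDK (the model name here is just a placeholder), setting stream=False returns the whole response in one body, removing the per-token timing pattern - though the overall response size can still leak some information:

```python
# Illustrative: request a non-streamed completion so the reply arrives as one
# response body rather than a token-by-token stream (openai Python SDK).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment
resp = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Summarize this document."}],
    stream=False,  # one response body; no per-token timing side channel
)
print(resp.choices[0].message.content)
```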

The findings come alongside new tests showing that several open-weight LLMs remain vulnerable to manipulation, especially during multi-turn conversations.

Researchers from Cisco AI Defense found even models built by major companies struggle to maintain safety controls once the dialogue becomes complex.

Some models, they said, displayed “a systemic inability… to maintain safety guardrails across extended interactions.”

In 2024, reports surfaced that an AI chatbot leaked over 300,000 files containing personally identifiable information, and hundreds of LLM servers were left exposed, raising questions about how secure AI chat platforms truly are.

Traditional defenses, such as antivirus software or firewall protection, cannot detect or block side-channel leaks like Whisper Leak, and these discoveries show AI tools can unintentionally widen exposure to surveillance and data inference.



Efosa Udinmwen
Freelance Journalist

