Protecting LLM chats from the eavesdropping Whisper Leak attack | Kaspersky official blog
How an attacker can find out the topic of your chats with an AI assistant without hacking your computer, and what you can do to guard against this threat
AI Analysis
Technical Summary
The Whisper Leak attack is a side-channel technique that targets AI chat confidentiality by analyzing encrypted network traffic patterns rather than decrypting data. Large language models stream responses token by token, producing timing and packet-size patterns that correlate with the semantic content of the conversation. Microsoft researchers tested 30 AI models with nearly 12,000 prompts and showed that "dangerous" topics, such as illegal activities, could be classified with 71%–100% accuracy by measuring server response delays, packet sizes, and packet counts. On more realistic datasets, detection rates varied by model but remained significant; for some models the attack achieved roughly 50% detection with zero false positives. The attack requires the adversary to observe traffic between the user and the AI servers, for example via control of ISP infrastructure or an internal organizational network. However, the attacker must train a detection classifier per topic of interest, which limits the breadth of feasible surveillance. Providers have responded by adding packet padding and altering server response behavior to obfuscate timing patterns, reducing the attack's efficacy. Users can further mitigate the risk by running local AI models, disabling streaming output so responses arrive in a single batch, avoiding sensitive topics on untrusted networks, and using VPNs. The attack does not compromise the AI model or user devices directly, but it threatens confidentiality by revealing conversation topics through traffic analysis.
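The study's actual classifier is not public. As a toy illustration of the general idea, assuming only that the eavesdropper observes per-response sequences of encrypted packet sizes, a nearest-centroid classifier over simple traffic features might look like this (all topic labels and traces below are made up):

```python
# Toy illustration of traffic-analysis classification (NOT the study's model).
# An observer who sees only encrypted packet sizes can still compute simple
# per-response features and compare them to profiles learned for known topics.

from statistics import mean

def features(trace):
    """Summarize one response as (mean packet size, packet count, total bytes).
    `trace` is a list of observed TLS record sizes in bytes."""
    return (mean(trace), len(trace), sum(trace))

def nearest_topic(trace, profiles):
    """Return the topic whose feature centroid is closest (Euclidean distance)."""
    f = features(trace)
    def dist(centroid):
        return sum((a - b) ** 2 for a, b in zip(f, centroid)) ** 0.5
    return min(profiles, key=lambda topic: dist(profiles[topic]))

# Hypothetical training data: packet-size traces labeled by topic.
training = {
    "sensitive": [[520, 610, 580, 640], [500, 590, 630]],
    "benign":    [[120, 140, 110], [130, 150, 125, 135]],
}
# One centroid per topic: the mean of each feature over that topic's traces.
profiles = {
    topic: tuple(mean(vals) for vals in zip(*(features(t) for t in traces)))
    for topic, traces in training.items()
}

print(nearest_topic([510, 600, 620], profiles))  # classified from traffic shape alone
```

Real traffic also carries inter-packet timing, which the researchers used alongside sizes; the same centroid scheme extends to timing features by appending them to the feature tuple.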
Potential Impact
For European organizations, the Whisper Leak attack poses a privacy risk by potentially exposing sensitive AI chat topics to network-level adversaries without breaching encryption. This could lead to unauthorized disclosure of confidential business discussions, intellectual property queries, or personal employee information. Law enforcement or corporate surveillance could exploit this to monitor queries related to regulated or sensitive subjects, impacting employee privacy and trust. The attack does not allow content reconstruction but can reveal thematic information, which may be sufficient for targeted monitoring or profiling. Organizations relying on cloud-based AI services are particularly vulnerable if their network traffic is accessible to attackers. This could affect sectors handling sensitive data such as healthcare, finance, legal, and government agencies. The attack’s feasibility depends on network access and the AI model used, with some models being more resistant. While the attack is not a direct system compromise, the confidentiality breach could have regulatory implications under GDPR and other privacy laws, leading to reputational and legal consequences.
Mitigation Recommendations
European organizations should prioritize AI providers that have implemented countermeasures such as packet padding and randomized response timing to mitigate Whisper Leak. Where possible, deploy local AI models on-premises to eliminate network exposure. Configure AI services to disable streaming output so that responses are delivered in a single batch rather than token by token, reducing the timing side channel. Enforce network segmentation and strict access controls to prevent unauthorized monitoring of traffic to AI servers, especially at ISP or organizational gateway levels. Employ robust VPN solutions with strong encryption and minimal traffic-pattern leakage to obscure packet timing and size. Regularly audit AI usage policies and educate employees to avoid sensitive queries over untrusted networks. Monitor networks for signs of interception, such as rogue taps, ARP spoofing, or unauthorized port mirroring. Collaborate with AI providers to stay informed about updates addressing this vulnerability. Finally, maintain endpoint security to prevent spyware or malware that could leak chat content directly from user devices, as this remains the primary risk vector.
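The provider-side padding countermeasure can be sketched as follows: padding every streamed chunk up to a fixed size bucket hides the exact token-dependent payload length. This is a minimal sketch under an assumed bucket size; real providers may use random padding amounts or other schemes:

```python
# Sketch of the packet-padding countermeasure: pad each payload up to the
# next multiple of BUCKET bytes, so an eavesdropper sees only coarse size
# buckets instead of exact token-dependent lengths.

BUCKET = 256  # illustrative bucket size, not any provider's actual setting

def pad(payload: bytes, bucket: int = BUCKET) -> bytes:
    """Append filler bytes so len(result) is the next multiple of `bucket`."""
    padded_len = -(-len(payload) // bucket) * bucket  # ceiling division
    return payload + b"\x00" * (padded_len - len(payload))

# Three chunks of very different true lengths collapse into two size buckets.
chunks = [b"Hel", b"lo, this is a streamed token chunk", b"!" * 300]
for c in chunks:
    print(len(c), "->", len(pad(c)))  # 3 -> 256, 34 -> 256, 300 -> 512
```

In a real deployment the padding would be applied inside the encrypted channel (e.g., as TLS record padding), so the filler never reaches the application layer; the point here is only that many distinct payload lengths map to a few observable sizes.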
Affected Countries
Germany, France, United Kingdom, Netherlands, Sweden, Italy, Spain, Poland
Technical Details
- Article Source
- Source URL: https://www.kaspersky.com/blog/chatbot-eavesdropping-whisper-leak-protection/54905/ (fetched 2025-12-04T10:50:25Z, 1,469 words)
Threat ID: 6931677103f8574ee0ebfa59
Added to database: 12/4/2025, 10:50:25 AM
Last enriched: 12/19/2025, 5:53:44 AM
Last updated: 1/18/2026, 3:07:05 PM
Views: 170