Malicious AI Assistant Extensions Harvest LLM Chat Histories
An investigation has uncovered malicious Chromium-based browser extensions masquerading as legitimate AI assistant tools to collect Large Language Model (LLM) chat histories and browsing data. These extensions have been installed approximately 900,000 times, affecting over 20,000 enterprise tenants. The malicious extensions collect full URLs and AI chat content from platforms like ChatGPT and DeepSeek, potentially exposing organizations to leaks of confidential information. The attack chain involves reconnaissance, weaponization, delivery through trusted app stores, exploitation of user trust, installation for persistence, and regular data exfiltration to attacker-controlled infrastructure. This activity transforms a seemingly benign productivity tool into a persistent data collection mechanism embedded in daily enterprise browser usage.
AI Analysis
Technical Summary
This threat involves malicious browser extensions targeting Chromium-based browsers, masquerading as legitimate AI assistant tools to harvest sensitive data. These extensions have been installed nearly 900,000 times, affecting over 20,000 enterprise tenants globally. The extensions collect comprehensive browsing data, including full URLs visited and chat histories from AI platforms such as ChatGPT and DeepSeek.

The attack lifecycle begins with reconnaissance to identify targets, followed by weaponization of the extensions to appear legitimate. Delivery occurs through trusted app stores, leveraging user trust to facilitate installation. Once installed, the extensions maintain persistence within the browser environment and continuously exfiltrate collected data to attacker-controlled infrastructure. This covert data collection transforms a seemingly benign productivity enhancement into a persistent espionage tool embedded in daily enterprise browser usage.

The threat leverages multiple MITRE ATT&CK techniques, including data from local system sources, command execution, persistence mechanisms, data staging, and exfiltration over command and control channels. Indicators of compromise include the domains chatgptsidebar.pro, chatsaigpt.com, deepaichats.com, and chataigpt.pro. No CVE or known exploit in the wild has been reported yet, but the scale of installations and the enterprise impact are significant. The threat is classified as medium severity due to the potential confidentiality impact and the ease of exploitation via social engineering and trusted delivery channels.
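Because the extensions persist inside the browser profile, defenders can inventory what is actually installed by reading each extension's manifest.json from the Chromium profile directory. The sketch below is a minimal audit under stated assumptions: the profile paths are common defaults (they vary by OS, browser, and deployment), and the permission triage at the end is a heuristic, not a verdict.

```python
import json
from pathlib import Path

# Typical Chromium extension roots; these are common defaults only and
# must be adjusted for the OS, browser brand, and profile in use.
DEFAULT_ROOTS = [
    Path.home() / ".config/google-chrome/Default/Extensions",  # Linux
    Path.home() / "Library/Application Support/Google/Chrome/Default/Extensions",  # macOS
]

def audit_extensions(root: Path) -> list[dict]:
    """Collect id, name, and requested permissions for each installed extension."""
    findings = []
    if not root.is_dir():
        return findings
    # Layout is <root>/<extension-id>/<version>/manifest.json
    for manifest in root.glob("*/*/manifest.json"):
        try:
            data = json.loads(manifest.read_text(encoding="utf-8"))
        except (OSError, json.JSONDecodeError):
            continue  # skip unreadable or malformed manifests
        findings.append({
            "id": manifest.parts[-3],
            "name": data.get("name", "?"),
            "permissions": data.get("permissions", []),
        })
    return findings

if __name__ == "__main__":
    for root in DEFAULT_ROOTS:
        for ext in audit_extensions(root):
            # Broad host access combined with "tabs" is worth manual triage,
            # since that is what URL and chat-content harvesting requires.
            print(ext["id"], ext["name"], ext["permissions"])
```

Run centrally via your endpoint management tooling, the output gives a per-host extension inventory that can be diffed against an approved list.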
Potential Impact
The primary impact is the unauthorized disclosure of sensitive organizational information, including confidential AI chat content and detailed browsing histories. This can lead to intellectual property theft, exposure of strategic plans, and leakage of personally identifiable information (PII). Enterprises relying on AI assistants for sensitive tasks are particularly vulnerable, as the stolen chat histories may contain proprietary or confidential data. The persistence of these extensions means long-term data leakage without immediate detection. The widespread installation scale increases the risk of large-scale data breaches affecting thousands of organizations. Additionally, the presence of such malicious extensions undermines trust in browser extension ecosystems and AI productivity tools. The data exfiltration to attacker-controlled domains could facilitate further targeted attacks, espionage, or blackmail. Operational disruption is possible if organizations respond by disabling extensions or restricting browser functionality. Overall, the threat compromises confidentiality and integrity of enterprise data with moderate impact on availability.
Mitigation Recommendations
1. Implement strict extension management policies in enterprise environments, allowing installation only from verified publishers and trusted sources.
2. Employ browser security tools that can detect and block suspicious extensions, and monitor extension permissions regularly.
3. Conduct regular audits of installed browser extensions across all enterprise endpoints to identify and remove unauthorized or suspicious extensions.
4. Monitor network traffic for connections to known malicious domains such as chatgptsidebar.pro, chatsaigpt.com, deepaichats.com, and chataigpt.pro, and block these at the firewall or proxy level.
5. Educate users about the risks of installing unverified AI assistant extensions and encourage verification of extension authenticity before installation.
6. Use endpoint detection and response (EDR) solutions to detect anomalous data exfiltration behaviors associated with browser extensions.
7. Collaborate with browser vendors to report malicious extensions and expedite their removal from app stores.
8. Limit the scope of sensitive data shared with AI assistants and avoid using browser extensions for critical or confidential workflows where possible.
9. Implement data loss prevention (DLP) solutions that can detect and prevent unauthorized transmission of sensitive data from endpoints.
10. Maintain up-to-date inventories of enterprise software and extensions to quickly respond to emerging threats.
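Recommendation 1 can be enforced on Chromium browsers through managed policies: block all extensions by default with `ExtensionInstallBlocklist` and allowlist only vetted IDs with `ExtensionInstallAllowlist`. The sketch below generates such a policy file; the example extension ID and output filename are placeholders, and the deployment path (e.g. `/etc/opt/chrome/policies/managed/` on Linux) varies by platform.

```python
import json
from pathlib import Path

def build_extension_policy(allowed_ids: list[str]) -> dict:
    """Build a Chromium managed policy that blocks every extension
    except the explicitly allowlisted IDs.

    "*" in ExtensionInstallBlocklist means "block everything"; entries in
    ExtensionInstallAllowlist override the blocklist.
    """
    return {
        "ExtensionInstallBlocklist": ["*"],
        "ExtensionInstallAllowlist": sorted(allowed_ids),
    }

if __name__ == "__main__":
    # Placeholder extension ID -- substitute the IDs of your vetted extensions.
    policy = build_extension_policy(["aaaabbbbccccddddeeeeffffgggghhhh"])
    Path("extension_policy.json").write_text(json.dumps(policy, indent=2))
    print(json.dumps(policy, indent=2))
```

Deploying an allowlist rather than a blocklist is the safer design here: it fails closed against newly published malicious extensions instead of chasing known-bad IDs.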
Affected Countries
United States, United Kingdom, Germany, Japan, Canada, Australia, South Korea, France, Netherlands, Singapore
Indicators of Compromise
- domain: chatgptsidebar.pro
- domain: chatsaigpt.com
- domain: deepaichats.com
- domain: chataigpt.pro
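The domains above can be swept against historical proxy or DNS logs to find endpoints that already communicated with the attacker infrastructure. A minimal sketch follows; the log lines in the usage example are illustrative, and the boundary check is a simple heuristic you should adapt to your proxy's actual log schema.

```python
# IoC domains published with this report.
IOC_DOMAINS = {
    "chatgptsidebar.pro",
    "chatsaigpt.com",
    "deepaichats.com",
    "chataigpt.pro",
}

def find_ioc_hits(log_lines, iocs=IOC_DOMAINS):
    """Return (line_number, domain) pairs for every log line touching an IoC.

    Requires a non-alphanumeric character before the match so that
    look-alike registrations such as "notchatsaigpt.com" do not count,
    while subdomains (e.g. "cdn.chatsaigpt.com") still do.
    """
    hits = []
    for n, line in enumerate(log_lines, start=1):
        lowered = line.lower()
        for domain in iocs:
            idx = lowered.find(domain)
            if idx == -1:
                continue
            before = lowered[idx - 1] if idx > 0 else " "
            if not before.isalnum():
                hits.append((n, domain))
    return hits

if __name__ == "__main__":
    sample = [
        "2026-03-06T11:02:11Z CONNECT chatsaigpt.com:443",
        "2026-03-06T11:02:12Z GET https://example.com/",
    ]
    for lineno, domain in find_ioc_hits(sample):
        print(f"line {lineno}: hit on {domain}")
```

Any hit should be treated as a trigger for the extension audit and host isolation steps in the mitigation list, since traffic to these domains implies the extension was already exfiltrating data.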
Technical Details
- Author: AlienVault
- TLP: white
- Reference: https://www.microsoft.com/en-us/security/blog/2026/03/05/malicious-ai-assistant-extensions-harvest-llm-chat-histories/
- Adversary: null
- Pulse ID: 69a9e3fba46b38943d724458
- Threat Score: null
Threat ID: 69aabacfc48b3f10ff5537f3
Added to database: 3/6/2026, 11:30:23 AM
Last enriched: 3/6/2026, 11:45:30 AM
Last updated: 3/7/2026, 9:26:59 AM