Hundreds of Malicious Crypto Trading Add-Ons Found in Moltbot/OpenClaw
Nearly 400 malicious crypto trading add-ons, termed 'skills', have been discovered in the Moltbot/OpenClaw AI assistant ecosystem. The fake add-ons impersonate legitimate cryptocurrency trading automation tools but deploy information-stealing malware, using social engineering to trick users into executing commands that exfiltrate crypto assets. The supply chain attack exploits the absence of security review in the skills publication process, allowing the malicious components to be distributed widely, and a shared command-and-control infrastructure coordinates them, amplifying the threat. Endpoint-native AI agents such as Moltbot/OpenClaw carry inherent security risks if not architected with robust controls. Although no exploits have been reported in the wild yet, the potential for significant financial loss and data compromise is high. The threat primarily targets users engaged in cryptocurrency trading across multiple platforms; European organizations that trade crypto or use AI assistants should remain vigilant and implement strict security measures.
AI Analysis
Technical Summary
The threat is a large-scale supply chain attack on the Moltbot/OpenClaw AI assistant project, in which almost 400 malicious add-ons (skills) masquerade as legitimate cryptocurrency trading automation tools. The skills deploy information-stealing malware that targets users' crypto assets. The attack exploits the skills publication process, which lacks adequate security review, enabling attackers to distribute malicious code widely. The malicious skills share a common command-and-control (C2) infrastructure, facilitating coordinated data exfiltration and remote control, and rely heavily on social engineering to convince users to execute commands that compromise their wallets and sensitive information. The observed behavior maps to MITRE ATT&CK techniques including T1071 (Application Layer Protocol), T1195 (Supply Chain Compromise), T1555 (Credentials from Password Stores), T1219 (Remote Access Software), T1552 (Unsecured Credentials), T1204 (User Execution), T1199 (Trusted Relationship), and T1056 (Input Capture). The threat highlights the risk that endpoint-native AI agents, if not designed with security in mind, become vectors for malware delivery and data theft. No CVE or known exploits in the wild have been reported yet, but the scale and sophistication suggest a medium-severity threat with potential for significant financial and reputational damage.
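Since the malicious skills reportedly embed a shared C2 address and rely on convincing users to run exfiltration commands, a defender could triage installed skill files with a simple static scan. The sketch below is illustrative only: the directory layout, file format, and pattern list are assumptions, not details from the report.

```python
"""Hypothetical triage sketch: flag installed 'skill' files that embed the
reported C2 address or common risky patterns. The skill directory layout and
the pattern list are assumptions for illustration, not part of the report."""
import re
from pathlib import Path

SUSPICIOUS_PATTERNS = [
    re.compile(r"91\.92\.242\.30"),                    # C2 IP from this report's IOCs
    re.compile(r"curl\s+.*\|\s*(ba)?sh"),              # pipe-to-shell execution
    re.compile(r"seed\s*phrase|private\s*key", re.I),  # wallet-secret references
]

def scan_skill(path: Path) -> list[str]:
    """Return the pattern strings that match a single skill file."""
    text = path.read_text(errors="ignore")
    return [p.pattern for p in SUSPICIOUS_PATTERNS if p.search(text)]

def scan_skills_dir(root: Path) -> dict[str, list[str]]:
    """Scan every file under root; map relative path -> matched patterns."""
    findings: dict[str, list[str]] = {}
    for f in root.rglob("*"):
        if f.is_file():
            hits = scan_skill(f)
            if hits:
                findings[str(f.relative_to(root))] = hits
    return findings
```

A scan like this only catches the crudest indicators; it complements, rather than replaces, the manual and automated code review recommended below.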
Potential Impact
European organizations involved in cryptocurrency trading or utilizing AI assistants like Moltbot/OpenClaw face risks including theft of crypto assets, exposure of sensitive credentials, and potential compromise of endpoint systems. Financial losses could be substantial due to direct theft or fraud. The shared C2 infrastructure means that once infected, multiple systems can be controlled remotely, increasing the attack surface and potential for lateral movement. The social engineering aspect increases the likelihood of successful exploitation, especially among users unfamiliar with AI assistant security risks. This threat could undermine trust in AI-driven trading tools and impact regulatory compliance related to data protection and financial security. Additionally, organizations may face operational disruptions if endpoint systems are compromised or if remediation efforts require system downtime. The lack of security review in the add-on publication process indicates a systemic vulnerability that could be exploited repeatedly, affecting the broader European crypto ecosystem.
Mitigation Recommendations
1. Implement strict vetting and security review processes for all AI assistant add-ons or skills before installation, including automated and manual code analysis.
2. Educate users on the risks of social engineering and the importance of verifying the authenticity of crypto trading tools and commands executed via AI assistants.
3. Employ endpoint detection and response (EDR) solutions capable of monitoring unusual command executions and network communications to known malicious C2 IPs such as 91.92.242.30.
4. Restrict AI assistant permissions to the minimum necessary and disable execution of untrusted or unsigned skills.
5. Use multi-factor authentication and hardware wallets for cryptocurrency transactions to reduce the impact of credential theft.
6. Monitor network traffic for signs of data exfiltration or communication with suspicious infrastructure.
7. Collaborate with AI assistant vendors to improve the security architecture, including sandboxing skills and enforcing strict publication policies.
8. Maintain up-to-date threat intelligence feeds and integrate indicators of compromise (IOCs) into security monitoring tools.
9. Conduct regular security awareness training focused on emerging AI-related threats and supply chain risks.
10. Establish incident response plans specifically addressing AI assistant compromise scenarios.
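Recommendations on monitoring network communications and integrating IOCs into security tooling can be sketched minimally as an IOC match over outbound connection events. The event schema below (host, dst_ip, dst_port) is an assumption for illustration; in practice these records would come from an EDR, netflow, or proxy log export.

```python
"""Minimal sketch: match outbound connection events against known C2 IPs.
The event dictionary schema is a hypothetical example, not a real EDR format."""

KNOWN_C2_IPS = {"91.92.242.30"}  # IOC published with this report

def flag_connections(events: list[dict]) -> list[dict]:
    """Return events whose destination IP matches a known C2 indicator."""
    return [e for e in events if e.get("dst_ip") in KNOWN_C2_IPS]

# Example: only the first event touches the reported C2 address.
events = [
    {"host": "wks-01", "dst_ip": "91.92.242.30", "dst_port": 443},
    {"host": "wks-02", "dst_ip": "203.0.113.10", "dst_port": 443},
]
print(flag_connections(events))
```

In production this lookup would sit behind a continuously updated threat intelligence feed rather than a hard-coded set.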
Affected Countries
Germany, United Kingdom, France, Netherlands, Switzerland, Sweden
Indicators of Compromise
- ip: 91.92.242.30
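One way to operationalize this indicator is to emit it as a network detection rule. The sketch below generates a basic Suricata-style alert rule for the reported C2 address; the message text and the sid value are placeholders, not an official signature.

```python
"""Illustrative only: render the reported C2 IP as a simple Suricata rule.
The sid and msg are placeholder values chosen for this example."""

C2_IP = "91.92.242.30"  # IOC from this report

def suricata_rule(ip: str, sid: int = 1000001) -> str:
    """Build an alert rule flagging any IP traffic to the given address."""
    return (
        f'alert ip any any -> {ip} any '
        f'(msg:"Possible Moltbot/OpenClaw malicious skill C2 traffic"; '
        f'sid:{sid}; rev:1;)'
    )

print(suricata_rule(C2_IP))
```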
Technical Details
- Author: AlienVault
- TLP: white
- References: https://www.infosecurity-magazine.com/news/malicious-crypto-trading-skills
- Adversary: null
- Pulse ID: 698329ed1fc44cc3c16e1038
- Threat Score: null
Threat ID: 69847bd0f9fa50a62f1a71f2
Added to database: 2/5/2026, 11:15:28 AM
Last enriched: 2/5/2026, 11:29:46 AM
Last updated: 2/5/2026, 7:55:59 PM
Related Threats
- Anatomy of a Russian Crypto Drainer Operation (Medium)
- AI-assisted cloud intrusion achieves admin access in 8 minutes (Medium)
- New year, new sector: Targeting India's startup ecosystem (Medium)
- Compromised Routers, DNS, and a TDS Hidden in Aeza Networks (Medium)
- Punishing Owl Attacks Russia: A New Owl in the Hacktivists' Forest (Medium)