New OpenClaw AI agent found unsafe for use | Kaspersky official blog
We explore whether OpenClaw can be safely installed and configured, and the risks involved in running this experiment.
AI Analysis
Technical Summary
OpenClaw, formerly known as Clawdbot and Moltbot, is an open-source AI agent that automates tasks by integrating with messaging apps and APIs, running primarily on Apple hardware but also on others. It gained rapid popularity due to its ability to self-learn and execute user-defined automations stored in local Markdown files. However, security researchers have uncovered a large number of vulnerabilities—512 in total, including eight critical ones—that expose users to significant risks. The agent lacks any built-in authentication by default, trusting all connections from localhost (127.0.0.1). When deployed behind improperly configured reverse proxies, external attackers can masquerade as local connections and gain full administrative access, including remote code execution capabilities. Prompt injection attacks exploit the AI’s language model by embedding malicious commands in emails or documents, causing it to leak private keys, API tokens, chat histories, and other sensitive data. The unmoderated OpenClaw skills catalog has become a vector for malware distribution, with over 230 malicious plugins identified that use social engineering to trick users into installing stealers disguised as legitimate utilities. These stealers exfiltrate browser passwords, crypto wallet data, cloud credentials, and more. The agent’s requirement for full OS and command line access further amplifies the risk, as misconfiguration can lead to system bricking or data compromise. Despite its appeal to hobbyists and tech enthusiasts, OpenClaw’s security posture is currently inadequate for safe use, especially on primary or work devices. The agent also consumes large amounts of AI tokens, increasing operational costs. Kaspersky recommends running OpenClaw only on isolated, dedicated hardware or VPS instances, using strict network allowlists, burner accounts for connected services, and regular deep security audits. The AI model Claude Opus 4.5 is preferred for better prompt injection resistance. Overall, OpenClaw exemplifies the challenges of securing autonomous AI agents in open-source environments.
Potential Impact
For European organizations, the OpenClaw vulnerabilities pose significant risks, particularly for those integrating AI automation into their IT environments or using Apple hardware extensively. The ability for attackers to remotely execute code with full system privileges can lead to complete system compromise, data theft, and operational disruption. Sensitive corporate data, private keys, API tokens, and communication histories can be exfiltrated, potentially leading to intellectual property loss, financial fraud, and reputational damage. The unmoderated plugin ecosystem increases the risk of supply chain attacks, where malicious code is distributed under the guise of legitimate functionality. Organizations experimenting with AI agents without proper isolation risk exposing critical infrastructure and user data. The high resource consumption of OpenClaw may also impact operational costs and system performance. Given the agent’s design flaws, any deployment in production or on devices connected to corporate networks could facilitate lateral movement and persistent threats. The lack of authentication and reliance on default trust assumptions make even less sophisticated attackers capable of exploiting these vulnerabilities. Prompt injection attacks further complicate defense, as they exploit inherent weaknesses in large language models, potentially bypassing traditional security controls. Overall, the threat undermines trust in AI automation tools and highlights the need for rigorous security practices in AI deployments.
Mitigation Recommendations
1. Deploy OpenClaw only on isolated, dedicated hardware or virtual private servers (VPS) that are not connected to critical networks or primary user devices. 2. Implement strict network segmentation and firewall rules using an allowlist approach to restrict all inbound and outbound traffic to only necessary ports and IP addresses. 3. Avoid exposing OpenClaw administrative interfaces to the public internet; if reverse proxies are used, ensure they are correctly configured to prevent forwarding external requests as localhost traffic. 4. Use burner or isolated accounts for all messaging apps and services connected to OpenClaw to limit the impact of credential theft. 5. Regularly perform deep security audits using OpenClaw’s built-in audit tools and third-party vulnerability scanners to detect misconfigurations and vulnerabilities. 6. Prefer AI models with better prompt injection resistance, such as Claude Opus 4.5, to reduce the risk of malicious prompt exploitation. 7. Avoid installing unverified or unmoderated skills/plugins from public repositories; only use vetted and signed extensions. 8. Educate users about the risks of social engineering and the ClickFix technique to prevent inadvertent installation of malicious code. 9. Monitor network traffic and system logs for unusual activity indicative of exploitation attempts or data exfiltration. 10. Keep abreast of updates from the OpenClaw community and security researchers, applying patches and configuration improvements promptly once available. 11. Consider alternative AI automation solutions with stronger security postures for production environments.
Affected Countries
Germany, France, United Kingdom, Netherlands, Sweden, Finland, Denmark, Norway, Switzerland, Austria, Belgium, Italy, Spain
New OpenClaw AI agent found unsafe for use | Kaspersky official blog
Description
We explore whether OpenClaw can be safely installed and configured, and the risks involved in running this experiment.
AI-Powered Analysis
Technical Analysis
OpenClaw, formerly known as Clawdbot and Moltbot, is an open-source AI agent that automates tasks by integrating with messaging apps and APIs, running primarily on Apple hardware but also on others. It gained rapid popularity due to its ability to self-learn and execute user-defined automations stored in local Markdown files. However, security researchers have uncovered a large number of vulnerabilities—512 in total, including eight critical ones—that expose users to significant risks. The agent lacks any built-in authentication by default, trusting all connections from localhost (127.0.0.1). When deployed behind improperly configured reverse proxies, external attackers can masquerade as local connections and gain full administrative access, including remote code execution capabilities. Prompt injection attacks exploit the AI’s language model by embedding malicious commands in emails or documents, causing it to leak private keys, API tokens, chat histories, and other sensitive data. The unmoderated OpenClaw skills catalog has become a vector for malware distribution, with over 230 malicious plugins identified that use social engineering to trick users into installing stealers disguised as legitimate utilities. These stealers exfiltrate browser passwords, crypto wallet data, cloud credentials, and more. The agent’s requirement for full OS and command line access further amplifies the risk, as misconfiguration can lead to system bricking or data compromise. Despite its appeal to hobbyists and tech enthusiasts, OpenClaw’s security posture is currently inadequate for safe use, especially on primary or work devices. The agent also consumes large amounts of AI tokens, increasing operational costs. Kaspersky recommends running OpenClaw only on isolated, dedicated hardware or VPS instances, using strict network allowlists, burner accounts for connected services, and regular deep security audits. The AI model Claude Opus 4.5 is preferred for better prompt injection resistance. Overall, OpenClaw exemplifies the challenges of securing autonomous AI agents in open-source environments.
Potential Impact
For European organizations, the OpenClaw vulnerabilities pose significant risks, particularly for those integrating AI automation into their IT environments or using Apple hardware extensively. The ability for attackers to remotely execute code with full system privileges can lead to complete system compromise, data theft, and operational disruption. Sensitive corporate data, private keys, API tokens, and communication histories can be exfiltrated, potentially leading to intellectual property loss, financial fraud, and reputational damage. The unmoderated plugin ecosystem increases the risk of supply chain attacks, where malicious code is distributed under the guise of legitimate functionality. Organizations experimenting with AI agents without proper isolation risk exposing critical infrastructure and user data. The high resource consumption of OpenClaw may also impact operational costs and system performance. Given the agent’s design flaws, any deployment in production or on devices connected to corporate networks could facilitate lateral movement and persistent threats. The lack of authentication and reliance on default trust assumptions make even less sophisticated attackers capable of exploiting these vulnerabilities. Prompt injection attacks further complicate defense, as they exploit inherent weaknesses in large language models, potentially bypassing traditional security controls. Overall, the threat undermines trust in AI automation tools and highlights the need for rigorous security practices in AI deployments.
Mitigation Recommendations
1. Deploy OpenClaw only on isolated, dedicated hardware or virtual private servers (VPS) that are not connected to critical networks or primary user devices. 2. Implement strict network segmentation and firewall rules using an allowlist approach to restrict all inbound and outbound traffic to only necessary ports and IP addresses. 3. Avoid exposing OpenClaw administrative interfaces to the public internet; if reverse proxies are used, ensure they are correctly configured to prevent forwarding external requests as localhost traffic. 4. Use burner or isolated accounts for all messaging apps and services connected to OpenClaw to limit the impact of credential theft. 5. Regularly perform deep security audits using OpenClaw’s built-in audit tools and third-party vulnerability scanners to detect misconfigurations and vulnerabilities. 6. Prefer AI models with better prompt injection resistance, such as Claude Opus 4.5, to reduce the risk of malicious prompt exploitation. 7. Avoid installing unverified or unmoderated skills/plugins from public repositories; only use vetted and signed extensions. 8. Educate users about the risks of social engineering and the ClickFix technique to prevent inadvertent installation of malicious code. 9. Monitor network traffic and system logs for unusual activity indicative of exploitation attempts or data exfiltration. 10. Keep abreast of updates from the OpenClaw community and security researchers, applying patches and configuration improvements promptly once available. 11. Consider alternative AI automation solutions with stronger security postures for production environments.
Technical Details
- Article Source
- {"url":"https://www.kaspersky.com/blog/openclaw-vulnerabilities-exposed/55263/","fetched":true,"fetchedAt":"2026-02-10T15:02:32.412Z","wordCount":1917}
Threat ID: 698b48884b57a58fa115db38
Added to database: 2/10/2026, 3:02:32 PM
Last enriched: 2/10/2026, 3:03:17 PM
Last updated: 2/20/2026, 11:37:37 PM
Views: 495
Community Reviews
0 reviewsCrowdsource mitigation strategies, share intel context, and vote on the most helpful responses. Sign in to add your voice and help keep defenders ahead.
Want to contribute mitigation steps or threat intel context? Sign in or create an account to join the community discussion.
Related Threats
CVE-2026-27026: CWE-770: Allocation of Resources Without Limits or Throttling in py-pdf pypdf
MediumCVE-2026-27025: CWE-834: Excessive Iteration in py-pdf pypdf
MediumCVE-2026-27024: CWE-835: Loop with Unreachable Exit Condition ('Infinite Loop') in py-pdf pypdf
MediumCVE-2026-27022: CWE-74: Improper Neutralization of Special Elements in Output Used by a Downstream Component ('Injection') in langchain-ai langgraphjs
MediumCVE-2026-2490: CWE-59: Improper Link Resolution Before File Access ('Link Following') in RustDesk Client for Windows
MediumActions
Updates to AI analysis require Pro Console access. Upgrade inside Console → Billing.
External Links
Need more coverage?
Upgrade to Pro Console in Console -> Billing for AI refresh and higher limits.
For incident response and remediation, OffSeq services can help resolve threats faster.