Skip to main content
Press slash or control plus K to focus the search. Use the arrow keys to navigate results and press enter to open a threat.
Reconnecting to live updates…

Researchers Uncover 30+ Flaws in AI Coding Tools Enabling Data Theft and RCE Attacks

0
Critical
Vulnerabilityremoterce
Published: Sat Dec 06 2025 (12/06/2025, 15:24:00 UTC)
Source: The Hacker News

Description

Over 30 security vulnerabilities have been disclosed in various artificial intelligence (AI)-powered Integrated Development Environments (IDEs) that combine prompt injection primitives with legitimate features to achieve data exfiltration and remote code execution. The security shortcomings have been collectively named IDEsaster by security researcher Ari Marzouk (MaccariTA). They affect popular

AI-Powered Analysis

AILast updated: 12/06/2025, 16:52:58 UTC

Technical Analysis

The IDEsaster vulnerabilities represent a new class of security flaws discovered in AI-powered Integrated Development Environments (IDEs) and coding assistants that integrate large language models (LLMs) with development workflows. These vulnerabilities exploit prompt injection primitives—techniques that manipulate the AI model's input context—to hijack the AI agent's behavior. The attack chain involves three key steps: bypassing LLM guardrails to control the AI's context, leveraging AI agents' auto-approved tool calls to perform actions without user interaction, and triggering legitimate IDE features to break security boundaries. This enables attackers to exfiltrate sensitive data or execute arbitrary commands remotely. The affected products include widely used AI IDEs and extensions such as Cursor, Windsurf, Kiro.dev, GitHub Copilot, Zed.dev, Roo Code, Junie, and Cline. Twenty-four of these vulnerabilities have assigned CVEs, with others pending or addressed via warnings. Notable attack scenarios include reading sensitive files and leaking data via remote JSON schemas, editing IDE settings files to execute malicious code, and overriding workspace configurations to achieve persistent code execution. These attacks often rely on AI agents configured to auto-approve file writes, allowing exploitation without user interaction or workspace reopening. The vulnerabilities also stem from trusting Model Context Protocol (MCP) servers that can be poisoned or compromised, allowing attackers to inject malicious prompts indirectly. Additional related flaws include command injection in OpenAI Codex CLI and indirect prompt injections in Google Antigravity, enabling credential harvesting and persistent backdoors. The research highlights the expanded attack surface introduced by AI agents in development environments, emphasizing the need for a new security paradigm termed "Secure for AI" that incorporates AI-specific threat modeling, least privilege enforcement, sandboxing, and continuous security testing. The findings underscore risks to repositories using AI for issue triage, code suggestions, or automated replies, which are vulnerable to prompt injection, command injection, secret exfiltration, and supply chain compromise.

Potential Impact

For European organizations, the IDEsaster vulnerabilities pose a critical threat to the confidentiality, integrity, and availability of software development environments. Exploitation can lead to theft of proprietary source code, intellectual property, and sensitive configuration data, resulting in significant financial and reputational damage. Remote code execution capabilities allow attackers to implant persistent backdoors, manipulate build pipelines, or compromise supply chains, potentially affecting downstream customers and partners. The automated nature of these attacks—requiring no user interaction—makes detection and prevention challenging, increasing the risk of widespread compromise. Organizations relying heavily on AI-assisted coding tools for software development, especially in sectors such as finance, telecommunications, automotive, and critical infrastructure, face heightened exposure. Additionally, the use of compromised MCP servers or poisoned external inputs can lead to supply chain attacks that propagate through software repositories and CI/CD pipelines. The complexity and novelty of these attack vectors demand immediate attention to secure AI development tools and workflows to prevent data breaches, operational disruptions, and regulatory non-compliance under GDPR and other European data protection laws.

Mitigation Recommendations

European organizations should implement the following specific mitigations: 1) Restrict the use of AI-powered IDEs and agents to trusted projects and verified source files only, avoiding untrusted or external codebases that may contain malicious prompts. 2) Continuously monitor and validate Model Context Protocol (MCP) servers and external data sources for unauthorized changes or suspicious activity, employing integrity checks and anomaly detection. 3) Enforce the principle of least privilege on AI agents and their tool integrations, limiting auto-approved actions and disabling automatic file writes where possible. 4) Harden system prompts and input validation to minimize prompt injection vectors, including sanitizing user inputs, URLs, and embedded metadata for hidden or invisible malicious instructions. 5) Employ sandboxing techniques to isolate AI agent executions and command invocations, preventing unauthorized access to critical system resources. 6) Conduct rigorous security testing focused on path traversal, command injection, and information leakage specific to AI IDE features and extensions. 7) Educate developers and security teams about the risks of prompt injection and the importance of secure AI usage practices. 8) Collaborate with AI IDE vendors to ensure timely patching and adoption of secure-by-design principles aligned with the "Secure for AI" paradigm. 9) Review and restrict CI/CD pipeline integrations with AI agents to prevent exploitation via prompt injections in automated workflows. 10) Maintain comprehensive logging and monitoring of AI agent activities to detect anomalous behavior indicative of exploitation attempts.

Need more detailed analysis?Get Pro

Technical Details

Article Source
{"url":"https://thehackernews.com/2025/12/researchers-uncover-30-flaws-in-ai.html","fetched":true,"fetchedAt":"2025-12-06T16:52:40.647Z","wordCount":1802}

Threat ID: 69345f5b6c01a8c605b56f2b

Added to database: 12/6/2025, 4:52:43 PM

Last enriched: 12/6/2025, 4:52:58 PM

Last updated: 12/7/2025, 9:13:04 AM

Views: 75

Community Reviews

0 reviews

Crowdsource mitigation strategies, share intel context, and vote on the most helpful responses. Sign in to add your voice and help keep defenders ahead.

Sort by
Loading community insights…

Want to contribute mitigation steps or threat intel context? Sign in or create an account to join the community discussion.

Actions

PRO

Updates to AI analysis require Pro Console access. Upgrade inside Console → Billing.

Please log in to the Console to use AI analysis features.

Need enhanced features?

Contact root@offseq.com for Pro access with improved analysis and higher rate limits.

Latest Threats