
CVE-2024-5826: CWE-94 Improper Control of Generation of Code in vanna-ai/vanna

Severity: Critical
Published: Jun 27, 2024, 18:40:37 UTC
Source: CVE Database V5
Vendor/Project: vanna-ai
Product: vanna-ai/vanna

Description

In the latest version of vanna-ai/vanna, the `vanna.ask` function is vulnerable to remote code execution due to prompt injection. The root cause is the lack of a sandbox when executing LLM-generated code, allowing an attacker to manipulate the code executed by the `exec` function in `src/vanna/base/base.py`. This vulnerability can be exploited by an attacker to achieve remote code execution on the app backend server, potentially gaining full control of the server.
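The vulnerable pattern described above can be illustrated with a minimal sketch. This is not the actual vanna source; the `run_generated_code` helper is a hypothetical stand-in for the code path in `src/vanna/base/base.py` where model output reaches `exec`:

```python
# Illustrative sketch (not the actual vanna code): executing
# LLM-generated text with exec() and no sandbox. Anything an
# attacker can steer into `generated_code` via prompt injection
# runs with the full privileges of the backend process.

def run_generated_code(generated_code: str) -> dict:
    """Execute model output and return the resulting local namespace."""
    local_vars: dict = {}
    exec(generated_code, {}, local_vars)  # no isolation, no validation
    return local_vars

# Benign model output behaves as intended...
result = run_generated_code("answer = 2 + 2")
print(result["answer"])  # 4

# ...but an injected prompt can make the model emit arbitrary
# Python instead, e.g. "import os; os.system('id')" -- the same
# code path then yields remote code execution.
```

The point of the sketch is that `exec` draws no distinction between the SQL-plotting code the application expects and attacker-chosen payloads: whatever the LLM emits is executed verbatim.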

AI-Powered Analysis

Last updated: 10/15/2025, 13:44:47 UTC

Technical Analysis

CVE-2024-5826 is a critical vulnerability in the vanna-ai/vanna product, specifically within the `vanna.ask` function that executes code generated by a large language model (LLM). The root cause is the absence of a sandbox or any isolation mechanism when executing this code via Python's `exec` function located in `src/vanna/base/base.py`. This design flaw allows an attacker to craft malicious prompts that inject arbitrary code, which is then executed on the backend server. Because the execution context lacks restrictions, the attacker can gain full control over the server, potentially leading to data theft, service disruption, or further lateral movement within the network.

The vulnerability is classified under CWE-94 (Improper Control of Generation of Code), highlighting the risks of executing dynamically generated code without proper validation or containment. The CVSS v3.0 score of 9.8 indicates a critical severity with network attack vector, no required privileges or user interaction, and full impact on confidentiality, integrity, and availability.

Although no public exploits are currently known, the vulnerability's characteristics make it highly exploitable once discovered by attackers. The lack of specified affected versions suggests that all current versions of vanna-ai/vanna may be vulnerable until patched. This vulnerability underscores the risks of integrating AI-generated code execution without robust security controls.
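As a point of contrast with the missing isolation the analysis describes, one minimal containment approach is to run generated code in a separate interpreter process with a hard timeout rather than `exec` inside the server. This is a hedged sketch only; `run_isolated` is a hypothetical helper, and a production deployment would add a container or seccomp profile, resource limits, and a dropped-privilege user on top of the process boundary shown here:

```python
import subprocess
import sys

def run_isolated(code: str, timeout: float = 5.0) -> str:
    """Run untrusted code in a child interpreter; return its stdout.

    -I puts Python in isolated mode (ignores environment variables,
    the user site directory, and the script's directory on sys.path).
    The timeout bounds runaway or stalling payloads.
    """
    proc = subprocess.run(
        [sys.executable, "-I", "-c", code],
        capture_output=True, text=True, timeout=timeout,
    )
    if proc.returncode != 0:
        raise RuntimeError(proc.stderr.strip())
    return proc.stdout

out = run_isolated("print(2 + 2)")  # out == "4\n"
```

A process boundary alone does not stop filesystem or network access by the payload, which is why the analysis stresses a real sandbox; but it does keep generated code out of the server's own memory space and credentials.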

Potential Impact

For European organizations, this vulnerability poses a severe risk, especially for those integrating vanna-ai/vanna into their AI workflows or backend systems. Successful exploitation can lead to complete server compromise, exposing sensitive data, intellectual property, and user information. It can also disrupt critical services, causing operational downtime and reputational damage.

Given the critical nature of the flaw, attackers could leverage this vulnerability to establish persistent footholds, conduct espionage, or launch further attacks within the network. Organizations in sectors such as finance, healthcare, and government, which often deploy AI-driven solutions, are particularly vulnerable. The lack of authentication and user interaction requirements means that attackers can exploit this remotely and anonymously, increasing the threat surface.

Additionally, the integration of AI tools in European digital transformation initiatives amplifies the potential impact. Failure to address this vulnerability promptly could result in regulatory penalties under GDPR if personal data is compromised.

Mitigation Recommendations

Immediate and longer-term mitigation steps:

- Disable or restrict the use of the `vanna.ask` function until a secure patch is available.
- Implement sandboxing or containerization to isolate the execution environment of any LLM-generated code, preventing unauthorized system access.
- Enforce rigorous input validation and sanitization to detect and block prompt injection attempts.
- Employ allowlists for permissible code constructs or commands to reduce risk.
- Enhance monitoring and logging of all code execution requests to detect anomalous behavior indicative of exploitation attempts.
- Tune network-level protections such as web application firewalls (WAFs) to identify suspicious payloads targeting the vulnerable function.
- Track vendor advisories for patches and apply them promptly.
- Conduct security reviews of AI integration points and educate developers on secure coding practices for dynamic code execution.
- Apply least-privilege principles to the backend server environment to limit the potential damage of a successful exploit.
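The allowlist recommendation can be sketched with Python's standard `ast` module: parse the model output and reject any syntax node or call target outside a small permitted set before it ever reaches `exec`. The node and call sets below are an illustrative minimum, not a vetted policy, and `is_safe` is a hypothetical helper name:

```python
import ast

# Illustrative allowlists: only simple assignments, literals,
# arithmetic, and a handful of builtin calls are permitted.
ALLOWED_NODES = {
    ast.Module, ast.Expr, ast.Assign, ast.Name, ast.Load, ast.Store,
    ast.Constant, ast.List, ast.BinOp, ast.Add, ast.Sub, ast.Mult,
    ast.Div, ast.Call,
}
ALLOWED_CALLS = {"len", "sum", "min", "max"}  # example set only

def is_safe(code: str) -> bool:
    """Return True only if every AST node falls inside the allowlist."""
    try:
        tree = ast.parse(code)
    except SyntaxError:
        return False
    for node in ast.walk(tree):
        if type(node) not in ALLOWED_NODES:
            return False  # imports, attribute access, etc. are rejected
        if isinstance(node, ast.Call):
            func = node.func
            if not (isinstance(func, ast.Name) and func.id in ALLOWED_CALLS):
                return False  # only named, allowlisted calls
    return True

print(is_safe("total = sum([1, 2, 3])"))       # True
print(is_safe("import os; os.system('id')"))   # False
```

Denylist-free, allowlist-first validation like this fails closed: anything the policy author did not explicitly anticipate, including attribute-access escape tricks such as `().__class__`, is rejected because its node types are simply not in the set.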


Technical Details

Data Version
5.1
Assigner Short Name
@huntr_ai
Date Reserved
2024-06-10T22:43:12.603Z
Cvss Version
3.0
State
PUBLISHED

Threat ID: 68ef9b2a178f764e1f470cf6

Added to database: 10/15/2025, 1:01:30 PM

Last enriched: 10/15/2025, 1:44:47 PM

Last updated: 10/16/2025, 11:13:23 AM


