
CVE-2025-46724: CWE-94: Improper Control of Generation of Code ('Code Injection') in langroid

Severity: Critical
Published: Tue May 20 2025 (05/20/2025, 17:22:13 UTC)
Source: CVE
Vendor/Project: langroid
Product: langroid

Description

Langroid is a Python framework for building large language model (LLM)-powered applications. Prior to version 0.53.15, `TableChatAgent` uses pandas `eval()`. If fed untrusted user input, as in a public-facing LLM application, it may be vulnerable to code injection. Langroid 0.53.15 sanitizes input to `TableChatAgent` by default to counter the most common attack vectors, and adds several warnings about the risky behavior to the project documentation.

AI-Powered Analysis

Last updated: 07/04/2025, 08:41:02 UTC

Technical Analysis

CVE-2025-46724 is a critical vulnerability classified under CWE-94 (Improper Control of Generation of Code, or code injection) affecting the langroid Python framework in versions prior to 0.53.15. Langroid is designed to facilitate the development of applications powered by large language models (LLMs). The vulnerability arises from the use of the pandas library's `eval()` function within the `TableChatAgent` component. This function evaluates string expressions as code; if fed untrusted user input, it can lead to arbitrary code execution. In a public-facing LLM deployment, an attacker could exploit the flaw by injecting malicious code through crafted inputs to `TableChatAgent`, resulting in full compromise of the host system.

The vulnerability has a CVSS v3.1 base score of 9.8 (critical): attack vector network (AV:N), low attack complexity (AC:L), no privileges required (PR:N), no user interaction needed (UI:N), and high impact on confidentiality, integrity, and availability (C:H/I:H/A:H). The issue was addressed in langroid 0.53.15 by sanitizing inputs to `TableChatAgent` and by adding warnings to the documentation about unsafe usage. No exploits are currently known in the wild, but the severity and ease of exploitation make this a significant threat if left unpatched. The flaw illustrates the risks of dynamic code evaluation functions such as pandas `eval()` used without strict input validation in security-sensitive contexts like public-facing AI applications.
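To make the injection vector concrete, here is a minimal sketch of the vulnerable pattern, not langroid's actual `TableChatAgent` implementation: the helper name `answer_table_question` and the sample dataframe are invented for illustration.

```python
import pandas as pd

# Toy table standing in for whatever data TableChatAgent is pointed at.
df = pd.DataFrame({"name": ["alice", "bob"], "salary": [50_000, 60_000]})

def answer_table_question(expr: str):
    # Vulnerable pattern (illustrative): an expression string that
    # originates with an untrusted user (directly, or relayed through an
    # LLM) is handed straight to pandas eval(), which evaluates it as code.
    return pd.eval(expr, engine="python", local_dict={"df": df})

# Intended use: harmless analytic expressions over the table.
print(answer_table_question("df['salary'].mean()"))  # 55000.0

# The same channel accepts arbitrary expressions: attribute access and
# method calls on in-scope objects all evaluate. Public research on
# pandas eval()/query() injection shows such attribute chains can be
# escalated to arbitrary code execution on the host.
print(answer_table_question("df.values"))  # dumps the raw table
```

The point of the sketch is that the expression string itself is the attack surface: anything the evaluator can express, the attacker can request.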

Potential Impact

For European organizations leveraging langroid in their AI or LLM-powered applications, this vulnerability poses a severe risk. Exploitation could lead to full system compromise, data breaches, and service disruption. Confidentiality is at risk as attackers could exfiltrate sensitive data processed or stored by the application. Integrity is compromised since attackers can execute arbitrary code, potentially altering data or application behavior. Availability is also threatened due to possible denial-of-service conditions or system crashes caused by malicious payloads. Given the increasing adoption of AI frameworks in sectors like finance, healthcare, and public services across Europe, exploitation could have cascading effects including regulatory non-compliance (e.g., GDPR violations), reputational damage, and financial losses. The lack of required authentication and user interaction means attackers can remotely exploit vulnerable instances without prior access, increasing the attack surface. Organizations running public-facing LLM applications or integrating langroid in their AI pipelines must consider this vulnerability critical and prioritize remediation to prevent potential large-scale impacts.

Mitigation Recommendations

1. Immediate upgrade: upgrade langroid to version 0.53.15 or later, where input sanitization for `TableChatAgent` is implemented.
2. Input validation: enforce strict validation and sanitization of all user input before it reaches `eval()` or any other dynamic code execution context (a minimal validation sketch follows this list).
3. Application isolation: run langroid applications in isolated environments or containers with minimal privileges to limit the impact of a successful exploit.
4. Monitoring and logging: enable detailed logging and monitoring of application behavior to detect anomalous inputs or execution patterns indicative of exploitation attempts.
5. Code review and testing: conduct security code reviews focused on dynamic code evaluation, and perform fuzz testing to identify injection vectors.
6. Disable or restrict `eval()`: where feasible, refactor application logic to avoid pandas `eval()`, or replace it with safer alternatives that do not execute arbitrary code.
7. Documentation and training: educate developers and operators about code injection risks and safe handling of user input in AI frameworks.
8. Network controls: employ network-level protections such as web application firewalls (WAFs) to detect and block malicious payloads targeting vulnerable endpoints.
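As one concrete shape recommendation 2 can take, the sketch below checks a user expression against a syntactic allowlist before it reaches pandas `eval()`. It is a minimal illustration under stated assumptions, not the sanitizer that langroid 0.53.15 actually ships: the `validate_expression` helper and both allowlists are invented for the example.

```python
import ast

import pandas as pd

# Hypothetical allowlists for illustration only; a real deployment would
# derive these from the table schema and the analytics actually needed.
ALLOWED_NAMES = {"df"}
ALLOWED_ATTRS = {"name", "salary", "mean", "sum", "min", "max", "count", "head"}

def validate_expression(expr: str) -> str:
    """Reject anything outside a small analytic subset (hypothetical helper)."""
    tree = ast.parse(expr, mode="eval")
    for node in ast.walk(tree):
        if isinstance(node, ast.Name) and node.id not in ALLOWED_NAMES:
            raise ValueError(f"disallowed name: {node.id}")
        if isinstance(node, ast.Attribute):
            # Blocks dunder pivots (__class__, __globals__, ...) and any
            # attribute that is not explicitly allowlisted.
            if node.attr.startswith("_") or node.attr not in ALLOWED_ATTRS:
                raise ValueError(f"disallowed attribute: {node.attr}")
        if isinstance(node, (ast.Lambda, ast.Starred, ast.comprehension)):
            raise ValueError("disallowed construct")
    return expr

df = pd.DataFrame({"name": ["alice", "bob"], "salary": [50_000, 60_000]})

ok = validate_expression("df['salary'].mean()")          # passes
print(pd.eval(ok, engine="python", local_dict={"df": df}))

try:
    validate_expression("df.values.__class__")           # rejected
except ValueError as err:
    print("blocked:", err)
```

Even with a filter like this in front of it, the evaluator should still be treated as untrusted and run with minimal privileges (recommendation 3), since expression-language allowlists have a history of bypasses.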


Technical Details

Data Version: 5.1
Assigner Short Name: GitHub_M
Date Reserved: 2025-04-28T20:56:09.084Z
CISA Enriched: true
CVSS Version: 3.1
State: PUBLISHED

Threat ID: 682cd0f71484d88663aeacaa

Added to database: 5/20/2025, 6:59:03 PM

Last enriched: 7/4/2025, 8:41:02 AM

Last updated: 8/18/2025, 6:24:18 AM


