CVE-2025-46725: CWE-94: Improper Control of Generation of Code ('Code Injection') in langroid langroid

High
Vulnerability · CVE-2025-46725 · CWE-94
Published: May 20, 2025, 17:24:31 UTC
Source: CVE
Vendor/Project: langroid
Product: langroid

Description

Langroid is a Python framework for building large language model (LLM)-powered applications. Prior to version 0.53.15, `LanceDocChatAgent` uses pandas `eval()` through `compute_from_docs()`. As a result, an attacker may be able to make the agent run malicious commands through `QueryPlan.dataframe_calc`, compromising the host system. Langroid 0.53.15 sanitizes input to the affected function by default to address the most common attack vectors, and adds several warnings about the risky behavior to the project documentation.

AI-Powered Analysis

Last updated: 07/04/2025, 08:41:17 UTC

Technical Analysis

CVE-2025-46725 is a high-severity code injection vulnerability affecting versions of the langroid Python framework prior to 0.53.15. Langroid is designed to facilitate building large language model (LLM)-powered applications. The vulnerability arises from the use of pandas' eval() function within the LanceDocChatAgent component, specifically in the compute_from_docs() method. This method evaluates expressions on dataframes using pandas eval(), which can execute arbitrary code if the input is not properly sanitized. An attacker can exploit this by crafting malicious input to the QueryPlan.dataframe_calc parameter, causing the agent to execute arbitrary commands on the host system. This leads to potential full system compromise, including unauthorized code execution, data theft, or disruption of services. The vulnerability is classified under CWE-94 (Improper Control of Generation of Code), indicating that the application fails to properly control or sanitize dynamically generated code. Langroid version 0.53.15 addresses this issue by sanitizing inputs to the affected function by default and adding warnings in the documentation about the risks of using eval(). No known exploits are currently reported in the wild, but the vulnerability's nature and high CVSS score (8.1) indicate a significant risk if left unpatched. The CVSS vector indicates that the vulnerability is remotely exploitable without authentication or user interaction, with high impact on confidentiality, integrity, and availability, and no scope change.
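The root hazard described above is generic to evaluating attacker-controlled expression strings. The following minimal sketch is not langroid's actual code; it uses Python's built-in `eval()` to illustrate why a string that "looks like data" can behave like code when passed to an expression evaluator (pandas' `eval()` with the Python engine exposes a similar surface):

```python
# Generic illustration (NOT langroid's implementation): an expression
# string supplied by an untrusted party is handed to an evaluator.
user_expr = "__import__('os').getcwd()"  # looks like a harmless string

# eval() parses and executes the expression: here it imports the os
# module and calls a function -- any callable could be reached this way.
result = eval(user_expr)

# The "data" produced a live function call and returned its result.
print(type(result).__name__)  # → str
```

In langroid's case, the expression reaches pandas `eval()` via `QueryPlan.dataframe_calc`, which is why version 0.53.15 sanitizes that input before evaluation.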

Potential Impact

For European organizations leveraging langroid framework versions prior to 0.53.15 in their LLM-powered applications, this vulnerability poses a severe risk. Successful exploitation could allow attackers to execute arbitrary code remotely, leading to full system compromise. This could result in data breaches, intellectual property theft, disruption of critical AI services, and potential lateral movement within enterprise networks. Given the increasing adoption of AI and LLM-based solutions in sectors such as finance, healthcare, and government across Europe, the impact could be substantial. Compromise of AI application infrastructure could undermine trust in AI-driven decision-making processes and expose sensitive personal or corporate data. Additionally, disruption or manipulation of AI outputs could have downstream effects on automated workflows and customer-facing services. The lack of required authentication and user interaction makes this vulnerability particularly dangerous in exposed environments, increasing the likelihood of exploitation if vulnerable versions are accessible over the network.

Mitigation Recommendations

European organizations should immediately audit their use of the langroid framework and identify any deployments running versions prior to 0.53.15. Upgrading to langroid 0.53.15 or later is the primary and most effective mitigation step, as it includes input sanitization and security warnings. If immediate upgrade is not feasible, organizations should implement strict input validation and sanitization on any user-controlled data passed to the LanceDocChatAgent, particularly the QueryPlan.dataframe_calc parameter. Network-level protections such as firewall rules and segmentation should restrict access to systems running vulnerable langroid versions, limiting exposure to untrusted networks. Monitoring and logging of unusual eval() usage or unexpected command execution attempts within the application can help detect exploitation attempts. Additionally, organizations should review and harden the security posture of hosts running langroid, including applying OS-level security controls, restricting execution privileges, and employing endpoint detection and response (EDR) solutions to detect anomalous behaviors. Finally, educating developers and operators about the risks of dynamic code execution and the importance of secure coding practices in AI frameworks is recommended to prevent similar issues.
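For teams that cannot upgrade immediately, the interim validation step above can be approximated with an allowlist check before any expression reaches `eval()`. The sketch below is a hypothetical example (the function name and pattern are illustrative, not part of langroid's API): it accepts only column identifiers, numeric literals, and basic arithmetic operators, rejecting function calls, attribute access, and dunder names outright.

```python
import re

# Hypothetical allowlist: identifier, optionally followed by
# (operator, identifier-or-number) pairs. No parentheses, dots,
# quotes, or underscore-prefixed dunders can slip through.
_SAFE_EXPR = re.compile(
    r"^[A-Za-z][A-Za-z0-9_]*"               # leading column name
    r"(\s*[+\-*/]\s*"                        # arithmetic operator
    r"([A-Za-z][A-Za-z0-9_]*|\d+(\.\d+)?))*$"  # column or number
)

def is_safe_calc(expr: str) -> bool:
    """Return True only if expr is plain column arithmetic."""
    return bool(_SAFE_EXPR.match(expr.strip()))
```

Under this check, `"price * qty"` passes while `"__import__('os').system('id')"` is rejected, because the parentheses and leading underscores fall outside the allowed grammar. An allowlist of this kind is deliberately strict; expressions needing richer syntax should be parsed with a proper grammar rather than loosening the pattern.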

Technical Details

Data Version: 5.1
Assigner Short Name: GitHub_M
Date Reserved: 2025-04-28T20:56:09.084Z
Cisa Enriched: true
Cvss Version: 4.0
State: PUBLISHED

Threat ID: 682cd0f71484d88663aeacac

Added to database: 5/20/2025, 6:59:03 PM

Last enriched: 7/4/2025, 8:41:17 AM

Last updated: 8/15/2025, 7:09:21 PM

Views: 23
