CVE-2026-33873: CWE-94: Improper Control of Generation of Code ('Code Injection') in langflow-ai langflow
Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assistant feature in Langflow executes LLM-generated Python code during its validation phase. Although this phase appears intended to validate generated component code, the implementation reaches dynamic execution sinks and instantiates the generated class server-side. In deployments where an attacker can access the Agentic Assistant feature and influence the model output, this can result in arbitrary server-side Python execution. Version 1.9.0 fixes the issue.
AI Analysis
Technical Summary
Langflow is a platform for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assistant feature in Langflow executes Python code generated by large language models (LLMs) during its validation phase. This phase is intended to verify generated component code, but the implementation reaches dynamic execution sinks and instantiates the generated Python classes server-side. Because the executed code is derived from LLM output, an attacker who can influence the model's output (for example, by providing crafted inputs or manipulating the environment) can inject arbitrary Python code that runs on the server. This is a classic code injection vulnerability categorized under CWE-94 (Improper Control of Generation of Code). Exploitation requires no user interaction, can be performed remotely with low attack complexity, and needs only the limited privileges required to access the Agentic Assistant feature. CVE-2026-33873 carries a CVSS 4.0 base score of 9.3, reflecting critical severity due to high impact on confidentiality, integrity, and availability. The issue was resolved in Langflow version 1.9.0 by removing or securing the dynamic execution of LLM-generated code during validation. No public exploits have been observed yet, but the nature of the vulnerability makes it a high-value target for attackers seeking full control over affected servers.
Potential Impact
The impact of CVE-2026-33873 is severe for organizations running vulnerable versions of Langflow. Successful exploitation allows attackers to execute arbitrary Python code on the server hosting Langflow, potentially leading to full system compromise. This can result in unauthorized data access, data modification or destruction, deployment of malware or ransomware, lateral movement within networks, and disruption of AI workflows critical to business operations. Given Langflow's role in AI agent orchestration, compromise could also enable manipulation of AI-driven decision-making, causing cascading operational and reputational damage. The vulnerability's ease of exploitation and lack of required user interaction increase the risk of automated or targeted attacks. Organizations relying on Langflow for AI deployments, especially those exposing the Agentic Assistant feature to untrusted users or networks, are at heightened risk. The absence of known exploits in the wild does not reduce the urgency, as proof-of-concept exploits could emerge rapidly following public disclosure.
Mitigation Recommendations
To mitigate CVE-2026-33873, organizations should immediately upgrade Langflow to version 1.9.0 or later, where the vulnerability is fixed. If upgrading is not immediately feasible, restrict access to the Agentic Assistant feature to trusted users and networks only, using network segmentation, firewalls, and strong authentication controls. Disable or remove the Agentic Assistant feature if it is not essential to operations. Implement runtime application self-protection (RASP) or web application firewalls (WAFs) with custom rules to detect and block suspicious Python code execution patterns. Conduct thorough code reviews and security testing of any custom workflows or AI components integrated with Langflow. Monitor logs and system behavior for unusual activity indicative of exploitation attempts. Educate developers and administrators about the risks of dynamic code execution from untrusted sources and enforce strict input validation and sanitization practices. Maintain an incident response plan tailored to AI platform compromises to enable rapid containment and recovery.
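As a complementary defensive pattern (illustrative, not Langflow-specific), generated component code can be checked statically with the standard-library ast module, which parses the code without ever executing it. The forbidden-name list below is a minimal example, not an exhaustive policy:

```python
import ast

# Example deny-list; a real policy would be broader (attribute
# calls, getattr tricks, dunder access, etc.).
FORBIDDEN_CALLS = {"exec", "eval", "__import__", "compile"}

def statically_validate(code: str) -> list[str]:
    """Parse untrusted generated code and report findings without
    executing it. An empty list means no obvious issues."""
    try:
        tree = ast.parse(code)
    except SyntaxError as err:
        return [f"syntax error: {err}"]
    findings = []
    for node in ast.walk(tree):
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in FORBIDDEN_CALLS:
                findings.append(f"forbidden call: {node.func.id}")
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            findings.append("import statement found; review required")
    return findings
```

Note that deny-list static analysis is easy to bypass in Python; it should gate, not replace, sandboxed or human review before any generated code is ever executed.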
Affected Countries
United States, Germany, United Kingdom, Canada, France, Japan, South Korea, Australia, Netherlands, Sweden
Technical Details
- Data Version: 5.2
- Assigner Short Name: GitHub_M
- Date Reserved: 2026-03-24T15:10:05.679Z
- CVSS Version: 4.0
- State: PUBLISHED
Threat ID: 69c6e8bb3c064ed76ff077be
Added to database: 3/27/2026, 8:29:47 PM
Last enriched: 3/27/2026, 8:45:17 PM
Last updated: 3/28/2026, 1:37:56 AM