
CVE-2026-33873: CWE-94: Improper Control of Generation of Code ('Code Injection') in langflow-ai langflow

Severity: Critical
Tags: Vulnerability, CVE-2026-33873, CWE-94
Published: Fri Mar 27 2026 (03/27/2026, 20:04:23 UTC)
Source: CVE Database V5
Vendor/Project: langflow-ai
Product: langflow

Description

Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assistant feature in Langflow executes LLM-generated Python code during its validation phase. Although this phase appears intended to validate generated component code, the implementation reaches dynamic execution sinks and instantiates the generated class server-side. In deployments where an attacker can access the Agentic Assistant feature and influence the model output, this can result in arbitrary server-side Python execution. Version 1.9.0 fixes the issue.

AI-Powered Analysis

Machine-generated threat intelligence

AI analysis last updated: 03/27/2026, 20:45:17 UTC

Technical Analysis

Langflow is a platform for building and deploying AI-powered agents and workflows. Prior to version 1.9.0, the Agentic Assistant feature executes Python code generated by large language models (LLMs) during its validation phase. This phase is intended to verify generated component code, but the implementation reaches dynamic execution sinks and instantiates the generated Python classes server-side. Because the executed code is derived from LLM output, an attacker who can influence the model's output, such as by providing crafted inputs or manipulating the environment, can inject arbitrary Python code that runs on the server.

This is a classic code injection vulnerability, categorized as CWE-94 (Improper Control of Generation of Code). Exploitation requires no user interaction, has low attack complexity, and can be performed remotely with only the limited privileges needed to access the Agentic Assistant feature. The vulnerability has been assigned CVE-2026-33873 with a CVSS 4.0 base score of 9.3, reflecting its critical severity: high impact on confidentiality, integrity, and availability, and a broad scope of affected systems.

The issue was resolved in Langflow version 1.9.0 by removing or securing the dynamic execution of LLM-generated code during validation. No public exploits have been observed yet, but the nature of the flaw makes it a high-value target for attackers seeking full control of affected servers.
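To make the mechanism concrete, the sketch below illustrates the general CWE-94 pattern described above: a "validation" routine that reaches a dynamic execution sink (`exec`) and instantiates the generated class server-side. The function name, payload, and environment-variable side effect are all hypothetical stand-ins for illustration, not Langflow's actual code.

```python
# Hypothetical sketch of the vulnerable pattern (CWE-94), NOT
# Langflow's actual implementation: "validating" LLM-generated
# component code by executing it server-side.
import os


def validate_component_unsafe(generated_code: str) -> bool:
    """Dangerous: executes untrusted LLM output in order to 'validate' it."""
    namespace: dict = {}
    exec(generated_code, namespace)  # dynamic execution sink
    # Find and instantiate the generated class -- a second sink.
    cls = next(v for v in namespace.values() if isinstance(v, type))
    cls()
    return True


# An attacker who can influence the model output smuggles in a payload.
# Setting an environment variable stands in for os.system(...), reverse
# shells, data exfiltration, etc.
payload = '''
import os

class Component:
    def __init__(self):
        os.environ["DEMO_PWNED"] = "yes"  # arbitrary code runs here
'''

validate_component_unsafe(payload)
print(os.environ.get("DEMO_PWNED"))  # -> yes: attacker code executed
```

The key point is that "validation" and "execution" are not separable once `exec` is reached: merely defining and constructing the class runs attacker-controlled `__init__` logic with the server process's full privileges.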

Potential Impact

The impact of CVE-2026-33873 is severe for organizations running vulnerable versions of Langflow. Successful exploitation allows attackers to execute arbitrary Python code on the server hosting Langflow, potentially leading to full system compromise: unauthorized data access, data modification or destruction, deployment of malware or ransomware, lateral movement within networks, and disruption of AI workflows critical to business operations. Given Langflow's role in AI agent orchestration, compromise could also enable manipulation of AI-driven decision-making, causing cascading operational and reputational damage. The vulnerability's ease of exploitation and lack of required user interaction increase the risk of automated or targeted attacks; organizations exposing the Agentic Assistant feature to untrusted users or networks are at heightened risk. The absence of known exploits in the wild does not diminish the urgency, as proof-of-concept exploits could emerge rapidly following public disclosure.

Mitigation Recommendations

To mitigate CVE-2026-33873:

- Upgrade Langflow to version 1.9.0 or later, where the vulnerability is fixed.
- If upgrading is not immediately feasible, restrict access to the Agentic Assistant feature to trusted users and networks only, using network segmentation, firewalls, and strong authentication controls.
- Disable or remove the Agentic Assistant feature if it is not essential to operations.
- Deploy runtime application self-protection (RASP) or web application firewalls (WAFs) with custom rules to detect and block suspicious Python code execution patterns.
- Conduct code reviews and security testing of any custom workflows or AI components integrated with Langflow.
- Monitor logs and system behavior for unusual activity indicative of exploitation attempts.
- Educate developers and administrators about the risks of dynamically executing code from untrusted sources, and enforce strict input validation and sanitization.
- Maintain an incident response plan tailored to AI platform compromises to enable rapid containment and recovery.


Technical Details

Data Version: 5.2
Assigner Short Name: GitHub_M
Date Reserved: 2026-03-24T15:10:05.679Z
CVSS Version: 4.0
State: PUBLISHED

Threat ID: 69c6e8bb3c064ed76ff077be

Added to database: 3/27/2026, 8:29:47 PM

Last enriched: 3/27/2026, 8:45:17 PM

Last updated: 3/28/2026, 1:37:56 AM



