CVE-2026-34046: CWE-639: Authorization Bypass Through User-Controlled Key in langflow-ai langflow
Langflow is a tool for building and deploying AI-powered agents and workflows. Prior to version 1.5.1, the `_read_flow` helper in `src/backend/base/langflow/api/v1/flows.py` branched on the `AUTO_LOGIN` setting to decide whether to filter by `user_id`. When `AUTO_LOGIN` was `False` (i.e., authentication was enabled), neither branch enforced an ownership check: the query returned any flow matching the given UUID regardless of who owned it. This allowed any authenticated user to read any other user's flow (including embedded plaintext API keys), modify the logic of another user's AI agents, and/or delete flows belonging to other users. The vulnerability was introduced by conditional logic meant to accommodate public/example flows (those with `user_id = NULL`) under auto-login mode, which inadvertently left the authenticated path without an ownership filter. The fix in version 1.5.1 removes the `AUTO_LOGIN` conditional entirely and unconditionally scopes the query to the requesting user.
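The flawed branching can be illustrated with a minimal, self-contained sketch. This is not the actual langflow source: the `Flow` model, the in-memory `FLOWS` table, and the function names are illustrative stand-ins for the real SQLModel query, but the control flow mirrors the bug described above.

```python
from dataclasses import dataclass
from typing import Optional
from uuid import UUID, uuid4

# Hypothetical in-memory stand-in for the flows table; not langflow's real model.
@dataclass
class Flow:
    id: UUID
    user_id: Optional[UUID]  # None marks a public/example flow
    data: str

FLOWS: dict[UUID, Flow] = {}

def read_flow_vulnerable(flow_id: UUID, current_user_id: UUID,
                         auto_login: bool) -> Optional[Flow]:
    """Pre-1.5.1 shape of the lookup: the AUTO_LOGIN branch was meant to
    admit public flows, but the authenticated path skipped ownership too."""
    flow = FLOWS.get(flow_id)
    if flow is None:
        return None
    if auto_login:
        # Auto-login mode: allow public flows and the user's own flows.
        if flow.user_id is None or flow.user_id == current_user_id:
            return flow
        return None
    # BUG: authentication is enabled, yet no ownership check is applied.
    return flow

def read_flow_fixed(flow_id: UUID, current_user_id: UUID) -> Optional[Flow]:
    """1.5.1 shape: the lookup is unconditionally scoped to the caller."""
    flow = FLOWS.get(flow_id)
    if flow is not None and flow.user_id == current_user_id:
        return flow
    return None
```

With two users, the difference is direct: a flow owned by one user is returned to the other by `read_flow_vulnerable` (with `auto_login=False`) but not by `read_flow_fixed`.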
AI Analysis
Technical Summary
CVE-2026-34046 is an authorization bypass vulnerability identified in langflow, a tool used for building and deploying AI-powered agents and workflows. The vulnerability exists in the _read_flow helper function located in src/backend/base/langflow/api/v1/flows.py. Prior to version 1.5.1, this function used conditional logic based on the AUTO_LOGIN setting to determine whether to filter flows by user ownership (user_id). When AUTO_LOGIN was set to False, meaning authentication was enabled, the function failed to enforce ownership checks, allowing any authenticated user to retrieve flows by UUID regardless of ownership. This flaw permitted any authenticated user to read other users' flows (which could contain sensitive information such as plaintext API keys), modify the logic of AI agents owned by others, or delete their flows. The root cause was the attempt to accommodate public/example flows (with user_id set to NULL) under auto-login mode, which inadvertently left the authenticated path without proper filtering. The fix implemented in version 1.5.1 removes the AUTO_LOGIN conditional entirely and enforces unconditional scoping of flow queries to the requesting user, thereby restoring proper authorization controls. The vulnerability carries a CVSS 4.0 score of 8.7, reflecting its high severity due to network exploitability, low attack complexity, no required privileges beyond basic authentication, and no user interaction needed. Although no known exploits are reported in the wild, the impact of unauthorized access to AI workflows and embedded credentials is significant.
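At the query level, the fix amounts to adding a `user_id` predicate that was previously absent on the authenticated path. The sqlite3 sketch below shows that shape with an illustrative schema; the table layout and column names are assumptions for demonstration, not langflow's actual DDL (langflow uses SQLModel, not raw SQL).

```python
import sqlite3

# Illustrative schema and data; not langflow's actual database layout.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE flow (id TEXT PRIMARY KEY, user_id TEXT, data TEXT)")
conn.execute("INSERT INTO flow VALUES ('flow-1', 'alice', 'sk-PLAINTEXT-KEY')")

def read_flow_prepatch(flow_id: str):
    # Pre-1.5.1 authenticated path: filters by flow id only.
    return conn.execute(
        "SELECT id, user_id, data FROM flow WHERE id = ?", (flow_id,)
    ).fetchone()

def read_flow_patched(flow_id: str, current_user_id: str):
    # 1.5.1: every lookup is scoped to the requesting user.
    return conn.execute(
        "SELECT id, user_id, data FROM flow WHERE id = ? AND user_id = ?",
        (flow_id, current_user_id),
    ).fetchone()
```

An authenticated attacker who knows or guesses a flow UUID gets the full row (including any embedded key material) from the pre-patch query, while the patched query returns nothing for flows the caller does not own.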
Potential Impact
This vulnerability poses a serious risk to organizations using langflow for AI workflow management. Unauthorized access to other users' flows can lead to exposure of sensitive data such as plaintext API keys, which can be leveraged to compromise other systems or cloud services. Attackers can also modify or delete AI agent logic, potentially disrupting automated processes or causing incorrect AI behavior, leading to operational failures or business process corruption. The ability to manipulate AI workflows undermines data integrity and availability, while unauthorized read access compromises confidentiality. Given that exploitation requires only authenticated access, insider threats or compromised user accounts can easily leverage this flaw. Organizations relying on langflow for critical AI deployments may face data breaches, service disruptions, and loss of trust. The vulnerability's network accessibility and lack of user interaction requirements increase the likelihood of exploitation in multi-tenant or collaborative environments.
Mitigation Recommendations
Organizations should immediately upgrade langflow to version 1.5.1 or later, where the vulnerability is fixed by enforcing strict user ownership checks on flow queries. Until patching is possible, implement compensating controls such as restricting access to langflow instances to trusted users only and monitoring user activity for unusual access patterns to other users' flows. Employ strong authentication mechanisms and consider multi-factor authentication to reduce risk from compromised credentials. Review and audit stored AI workflows for exposure of sensitive information like API keys and rotate any potentially leaked credentials. Limit the use of shared or public flows to minimize the attack surface. Additionally, conduct regular code reviews and penetration testing focused on authorization logic to detect similar flaws. Network segmentation and access controls can further reduce exposure of vulnerable langflow instances to untrusted networks.
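As a quick triage aid, the installed langflow version can be compared against the patched release. This is a minimal sketch using only the standard library; the naive version parser ignores pre-release suffixes, and a packaging-aware comparison (e.g. `packaging.version`) would be more robust in production tooling.

```python
from importlib import metadata

MIN_SAFE = (1, 5, 1)  # first release with the ownership-check fix

def parse_version(v: str) -> tuple:
    """Naively parse the numeric release segment, e.g. '1.5.0' -> (1, 5, 0).
    Pre-release suffixes are ignored for this quick check."""
    parts = []
    for piece in v.split("."):
        digits = "".join(ch for ch in piece if ch.isdigit())
        if not digits:
            break
        parts.append(int(digits))
    return tuple(parts)

def is_vulnerable(version: str) -> bool:
    """True if the given langflow version predates the 1.5.1 fix."""
    return parse_version(version) < MIN_SAFE

def check_installed() -> None:
    try:
        v = metadata.version("langflow")
    except metadata.PackageNotFoundError:
        print("langflow is not installed in this environment")
        return
    status = "VULNERABLE (< 1.5.1)" if is_vulnerable(v) else "patched"
    print(f"langflow {v}: {status}")
```

Running `check_installed()` in each deployment environment gives a fast inventory of which instances still need the upgrade.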
Affected Countries
United States, Germany, United Kingdom, Canada, France, Australia, Japan, South Korea, Netherlands, Sweden
Technical Details
- Data Version: 5.2
- Assigner Short Name: GitHub_M
- Date Reserved: 2026-03-25T15:29:04.745Z
- CVSS Version: 4.0
- State: PUBLISHED
Threat ID: 69c6e8bb3c064ed76ff077d0
Added to database: 3/27/2026, 8:29:47 PM
Last enriched: 3/27/2026, 8:44:51 PM
Last updated: 3/28/2026, 1:36:11 AM