
CVE-2026-34070: CWE-22: Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal') in langchain-ai langchain

Severity: High
Tags: Vulnerability, CVE-2026-34070, CWE-22
Published: Tue Mar 31 2026 (03/31/2026, 02:01:49 UTC)
Source: CVE Database V5
Vendor/Project: langchain-ai
Product: langchain

Description

CVE-2026-34070 is a high-severity path traversal vulnerability in langchain versions prior to 1.2.22. It affects functions in langchain_core.prompts.loading that read files from user-influenced deserialized config dictionaries without properly validating file paths. An attacker can exploit this flaw by supplying crafted prompt configurations to load_prompt() or load_prompt_from_config(), enabling arbitrary file read on the host system, limited only by file extension checks (.txt, .json, .yaml).

AI-Powered Analysis

Machine-generated threat intelligence

Last updated: 03/31/2026, 19:19:30 UTC

Technical Analysis

CVE-2026-34070 is a path traversal vulnerability classified under CWE-22, discovered in the langchain framework, a popular tool for building agents and large language model (LLM)-powered applications. The vulnerability exists in multiple functions within the langchain_core.prompts.loading module, specifically in how they handle file paths embedded in deserialized configuration dictionaries. Prior to version 1.2.22, these functions do not adequately validate or sanitize the file paths, allowing attackers to craft malicious prompt configurations that include directory traversal sequences or absolute paths. When such crafted configurations are passed to load_prompt() or load_prompt_from_config(), the application may read arbitrary files from the host filesystem. Although the vulnerability restricts file reads to certain extensions (.txt for templates, .json and .yaml for examples), this still permits exposure of sensitive information stored in these formats. The vulnerability is remotely exploitable without requiring authentication or user interaction, increasing its risk profile. The flaw was addressed and patched in langchain version 1.2.22, which implements proper validation to prevent directory traversal and absolute path injection attacks. No known exploits have been reported in the wild as of the publication date. The CVSS v3.1 base score is 7.5, indicating a high severity primarily due to the potential for unauthorized disclosure of confidential information.
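The flawed pattern described above can be illustrated with a simplified sketch. This is not the actual langchain source; the function name, directory, and file paths are hypothetical. The point is that joining a path taken from a deserialized config onto a prompt directory, while checking only the file extension, lets traversal sequences escape that directory:

```python
from pathlib import Path

# Hypothetical prompt directory for illustration.
PROMPT_DIR = Path("/app/prompts")

def load_template_vulnerable(config: dict) -> str:
    """Naive loader mirroring the flawed pattern: the path from the
    deserialized config is joined and read with no containment check."""
    template_path = PROMPT_DIR / config["template_path"]
    # Only the extension is validated -- traversal sequences pass through.
    if template_path.suffix not in {".txt", ".json", ".yaml"}:
        raise ValueError("unsupported extension")
    return template_path.read_text()

# A crafted config escapes the prompt directory entirely
# (target filename is hypothetical):
crafted = {"template_path": "../../etc/ssh/sshd_config.txt"}
resolved = (PROMPT_DIR / crafted["template_path"]).resolve()
print(resolved)  # escapes PROMPT_DIR: /etc/ssh/sshd_config.txt
```

The extension check is satisfied (the crafted path ends in .txt), which matches the CVE's note that file reads are limited only by extension, not by location.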

Potential Impact

The primary impact of CVE-2026-34070 is unauthorized disclosure of sensitive information through arbitrary file reads on systems running vulnerable versions of langchain. Organizations leveraging langchain for LLM applications may inadvertently expose configuration files, credentials, logs, or other sensitive data stored in .txt, .json, or .yaml files. This can lead to data breaches, intellectual property theft, or leakage of internal system details that could facilitate further attacks. Since the vulnerability does not affect integrity or availability, the direct impact is limited to confidentiality. However, the ease of exploitation without authentication or user interaction means attackers can remotely probe and extract data at scale. Enterprises in sectors such as technology, finance, healthcare, and government using langchain in production environments face elevated risk. Additionally, attackers could use the disclosed information to escalate privileges or pivot within networks, amplifying the threat. The lack of known exploits in the wild currently limits immediate widespread impact, but the vulnerability’s presence in a widely adopted AI framework underscores the need for prompt remediation.

Mitigation Recommendations

To mitigate CVE-2026-34070, organizations should immediately upgrade langchain to version 1.2.22 or later, where the vulnerability is patched. For environments where immediate upgrade is not feasible, implement strict input validation and sanitization on any user-supplied prompt configurations before passing them to load_prompt() or load_prompt_from_config(). Employ application-layer controls to restrict file system access and enforce least privilege principles for the application runtime user. Monitor logs for suspicious file access patterns indicative of directory traversal attempts. Consider deploying runtime application self-protection (RASP) or web application firewalls (WAF) with custom rules to detect and block path traversal payloads targeting langchain endpoints. Conduct thorough code reviews and security testing of any custom integrations with langchain to ensure no insecure deserialization or unsafe file handling occurs. Finally, maintain an inventory of all systems running langchain to ensure timely patch management and vulnerability scanning.
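For the interim hardening advice above, a minimal validation helper can resolve the supplied path and reject anything that lands outside the allowed prompt directory or uses a disallowed extension. This is a standard-library sketch with hypothetical names, not code from langchain's patch:

```python
from pathlib import Path

ALLOWED_SUFFIXES = {".txt", ".json", ".yaml"}

def safe_prompt_path(base_dir: str, user_path: str) -> Path:
    """Resolve user_path against base_dir and reject traversal.

    Raises ValueError if the resolved path escapes base_dir or does
    not use one of the allowed file extensions.
    """
    base = Path(base_dir).resolve()
    candidate = (base / user_path).resolve()
    if candidate.suffix not in ALLOWED_SUFFIXES:
        raise ValueError(f"disallowed extension: {candidate.suffix!r}")
    # is_relative_to requires Python 3.9+; compare .parents on older versions.
    if not candidate.is_relative_to(base):
        raise ValueError(f"path escapes {base}: {user_path!r}")
    return candidate
```

Validated paths can then be handed to load_prompt(). Absolute user paths are also neutralized: joining an absolute right-hand operand with `base / user_path` discards `base`, and the containment check then rejects the result.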


Technical Details

Data Version: 5.2
Assigner Short Name: GitHub_M
Date Reserved: 2026-03-25T16:21:40.867Z
CVSS Version: 3.1
State: PUBLISHED

Threat ID: 69cc1e09e6bfc5ba1d33b7e8

Added to database: 3/31/2026, 7:18:33 PM

Last enriched: 3/31/2026, 7:19:30 PM

Last updated: 4/1/2026, 3:52:29 AM

