
CVE-2025-68664: CWE-502: Deserialization of Untrusted Data in langchain-ai langchain

Severity: Critical
Tags: Vulnerability, CVE-2025-68664, CWE-502
Published: Tue Dec 23 2025 (12/23/2025, 22:47:44 UTC)
Source: CVE Database V5
Vendor/Project: langchain-ai
Product: langchain

Description

LangChain is a framework for building agents and LLM-powered applications. Prior to versions 0.3.81 and 1.2.5, a serialization injection vulnerability exists in LangChain's dumps() and dumpd() functions. The functions do not escape dictionaries with 'lc' keys when serializing free-form dictionaries. The 'lc' key is used internally by LangChain to mark serialized objects. When user-controlled data contains this key structure, it is treated as a legitimate LangChain object during deserialization rather than plain user data. This issue has been patched in versions 0.3.81 and 1.2.5.
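For orientation, a minimal sketch of the envelope shape that dumps() and dumpd() emit for a genuinely serializable object; the class path and kwargs below are illustrative examples, not values taken from the advisory. A free-form, user-controlled dictionary that happens to mimic this shape is exactly what the pre-patch code fails to escape.

```python
# Illustrative shape of LangChain's internal serialization envelope
# (field values are examples, not taken from the advisory).
genuine_envelope = {
    "lc": 1,                 # internal marker: "this is a serialized LangChain object"
    "type": "constructor",   # tells the loader to construct an object on deserialization
    "id": ["langchain", "schema", "messages", "SystemMessage"],  # class path to import
    "kwargs": {"content": "You are a helpful assistant."},       # constructor arguments
}
```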

AI-Powered Analysis

Last updated: 12/23/2025, 23:11:42 UTC

Technical Analysis

LangChain is a popular framework for building applications powered by large language models (LLMs) and intelligent agents. CVE-2025-68664 is a deserialization of untrusted data flaw (CWE-502) in the dumps() and dumpd() serialization functions of LangChain versions prior to 0.3.81 and 1.2.5. These functions fail to escape free-form dictionaries that contain the 'lc' key, which LangChain uses internally to mark serialized objects. When user-controlled input includes this key structure, the deserialization process treats it as a legitimate LangChain object rather than plain data. An attacker can therefore craft payloads that cause unintended objects to be instantiated during deserialization, manipulating the application's internal state and, depending on which classes the loader is permitted to construct, exposing sensitive data.

The vulnerability is remotely exploitable without authentication or user interaction, which raises its risk profile. The CVSS 3.1 score of 9.3 reflects its critical nature: network attack vector, low attack complexity, no privileges required, and no user interaction needed. The impact primarily affects confidentiality, allowing attackers to access sensitive data, with some integrity impact but no availability impact. Although no exploits have been reported in the wild, the vulnerability's characteristics make it a high-priority issue for organizations running LangChain in production. The flaw was publicly disclosed on December 23, 2025, and patched in versions 0.3.81 and 1.2.5; organizations should verify their LangChain versions and update immediately.
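A minimal round-trip sketch of the confusion, assuming an unpatched release (prior to 0.3.81 on the 0.3 line or 1.2.5 on the 1.x line); the injected id and kwargs are placeholders chosen for illustration, not a known exploit payload:

```python
# Sketch only: demonstrates the type confusion, not a weaponized exploit.
# Assumes a vulnerable langchain-core where dumps() does not escape 'lc' keys.
from langchain_core.load import dumps, loads

# "Plain" user-supplied metadata that mimics the serialization envelope.
user_metadata = {
    "lc": 1,
    "type": "constructor",
    "id": ["langchain", "schema", "messages", "SystemMessage"],
    "kwargs": {"content": "attacker-chosen instructions"},
}

# The application intends to serialize user_metadata as ordinary data...
blob = dumps({"metadata": user_metadata})

# ...but on vulnerable versions the loader sees the 'lc' marker and
# reconstructs a SystemMessage object instead of returning the dict.
restored = loads(blob)
print(type(restored["metadata"]))  # a message object rather than a plain dict when unpatched
```

On patched releases, the inner dictionary is escaped during serialization, so the round trip returns it as plain data.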

Potential Impact

For European organizations, the impact of CVE-2025-68664 is significant, especially for those using LangChain in AI-driven applications, automated agents, or data processing pipelines. Successful exploitation can lead to unauthorized disclosure of sensitive information, including proprietary data, user inputs, or internal model parameters, compromising confidentiality. The ability to manipulate deserialization logic may also allow attackers to bypass security controls or escalate privileges within affected systems. Sectors such as finance, healthcare, research institutions, and technology companies that rely on AI frameworks are particularly exposed. The vulnerability could erode trust in AI applications and lead to regulatory compliance issues under GDPR if personal data is exposed. Given the critical CVSS score and the absence of any authentication requirement, the threat can spread rapidly if unpatched systems are exposed to the internet or to untrusted inputs. Although no exploits are known at present, automated exploitation tooling is likely to emerge, increasing the urgency of mitigation.

Mitigation Recommendations

1. Immediately upgrade all LangChain deployments to a patched release (0.3.81 or later on the 0.3 line, 1.2.5 or later on the 1.x line).
2. Audit all code paths that use LangChain's dumps() and dumpd() functions to ensure no untrusted user input is serialized without validation or sanitization.
3. Implement strict input validation to reject or escape user-controlled data containing the 'lc' key or similarly structured payloads (a sketch of such a check follows this list).
4. Employ runtime application self-protection (RASP) or behavior monitoring to detect anomalous deserialization activity.
5. Restrict network exposure of services that use LangChain serialization to trusted internal networks, minimizing the attack surface.
6. Conduct security testing and code reviews focused on serialization and deserialization logic.
7. Monitor threat intelligence feeds for emerging exploit attempts targeting this vulnerability.
8. Prepare incident response plans that specifically address deserialization attacks, enabling rapid containment if exploitation occurs.
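A hedged sketch of the kind of pre-serialization check described in recommendations 2 and 3; the helper name and rejection policy are illustrative, not part of LangChain's API:

```python
# Hypothetical guard (not a LangChain API): reject user-controlled structures
# that carry LangChain's internal 'lc' marker before they reach dumps()/dumpd().
from typing import Any

def assert_no_lc_marker(value: Any, path: str = "$") -> None:
    """Recursively refuse dictionaries that contain an 'lc' key."""
    if isinstance(value, dict):
        if "lc" in value:
            raise ValueError(f"refusing to serialize untrusted data: 'lc' key at {path}")
        for key, item in value.items():
            assert_no_lc_marker(item, f"{path}.{key}")
    elif isinstance(value, (list, tuple)):
        for index, item in enumerate(value):
            assert_no_lc_marker(item, f"{path}[{index}]")

# Usage: validate untrusted input before handing it to LangChain serialization.
assert_no_lc_marker({"metadata": {"user": "alice", "note": "harmless"}})   # passes
# assert_no_lc_marker({"metadata": {"lc": 1, "type": "constructor"}})      # raises ValueError
```

Upgrading remains the primary fix; a guard like this only reduces exposure while patching is in progress.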


Technical Details

Data Version: 5.2
Assigner Short Name: GitHub_M
Date Reserved: 2025-12-22T23:28:02.917Z
CVSS Version: 3.1
State: PUBLISHED

Threat ID: 694b1e31d0b9012ffd688bf5

Added to database: 12/23/2025, 10:56:49 PM

Last enriched: 12/23/2025, 11:11:42 PM

Last updated: 12/24/2025, 2:48:46 AM


