
CVE-2025-68665: CWE-502: Deserialization of Untrusted Data in langchain-ai langchainjs

High
Vulnerability · CVE-2025-68665 · CWE-502
Published: Tue Dec 23 2025 (12/23/2025, 22:56:04 UTC)
Source: CVE Database V5
Vendor/Project: langchain-ai
Product: langchainjs

Description

CVE-2025-68665 is a high-severity deserialization vulnerability in the LangChain JS framework, affecting @langchain/core versions prior to 0.3.80 and 1.1.8, and langchain versions prior to 0.3.37 and 1.2.3. The flaw arises from improper handling of objects containing the 'lc' key during JSON serialization, allowing attacker-controlled data to be interpreted as legitimate LangChain objects during deserialization. This can lead to remote code execution or other malicious actions without requiring authentication or user interaction. The vulnerability directly impacts confidentiality but not integrity or availability.

AI-Powered Analysis

Last updated: 12/31/2025, 00:27:22 UTC

Technical Analysis

CVE-2025-68665 is a deserialization of untrusted data vulnerability (CWE-502) found in the LangChain JS framework, a popular tool for building applications powered by large language models (LLMs). The vulnerability exists in versions prior to @langchain/core 0.3.80 and 1.1.8, and langchain 0.3.37 and 1.2.3. The root cause is the toJSON() method's failure to properly escape objects containing the 'lc' key during serialization. The 'lc' key is internally used by LangChain to mark serialized objects. When user-supplied data includes this key, the deserialization process mistakenly treats it as a legitimate LangChain object rather than plain data, enabling serialization injection attacks.

This flaw can be exploited remotely without authentication or user interaction, as it involves the processing of crafted JSON data. Exploitation could allow attackers to execute arbitrary code or manipulate application behavior, compromising confidentiality. The vulnerability has a CVSS 3.1 base score of 8.6 (AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:N/A:N), indicating network attack vector, low complexity, no privileges or user interaction required, and a scope change with high confidentiality impact but no integrity or availability impact.

No known exploits are reported in the wild yet. The issue has been patched in the specified versions, but unpatched deployments remain vulnerable.
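The injection mechanism described above can be sketched in a few lines. This is a simplified simulation, not the actual langchainjs code: the `looksSerialized` and `naiveRevive` helper names are illustrative, and only the 'lc' marker convention is taken from the advisory.

```javascript
// Simplified model of the CWE-502 pattern: a reviver that trusts any
// object carrying the internal 'lc' marker key.

function looksSerialized(value) {
  // Framework-internal check modeled on the advisory's description:
  // the presence of 'lc' is taken to mean "this is a serialized object".
  return typeof value === "object" && value !== null && "lc" in value;
}

// Naive revive step: anything with an 'lc' key is treated as a
// framework object to reconstruct -- including attacker-supplied data
// that was never produced by the framework's own toJSON().
function naiveRevive(value) {
  if (looksSerialized(value)) {
    return "reconstructing " + (value.id || []).join("/"); // dangerous path
  }
  return "plain data";
}

// User input that merely *contains* the marker is misclassified:
const userInput = JSON.parse(
  '{"lc": 1, "type": "constructor", "id": ["attacker", "Payload"]}'
);
console.log(naiveRevive(userInput));          // -> "reconstructing attacker/Payload"
console.log(naiveRevive({ note: "hello" })); // -> "plain data"
```

The patched behavior, per the advisory, is to escape 'lc' keys during serialization so that user data round-trips as plain data and never reaches the reconstruction path.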

Potential Impact

For European organizations, the vulnerability poses a significant risk to confidentiality of sensitive data processed by LLM-powered applications built using LangChain JS. Attackers exploiting this flaw could gain unauthorized access to internal data or execute arbitrary code within the application context, potentially leading to data breaches or further lateral movement. Given the increasing adoption of AI and LLM frameworks in sectors such as finance, healthcare, and government across Europe, exploitation could disrupt critical services or expose confidential information. The lack of required authentication and user interaction lowers the barrier for attackers, increasing the threat level. However, the vulnerability does not directly affect data integrity or system availability, which somewhat limits the scope of impact. Organizations relying on LangChain JS for AI workflows must consider this a high-priority risk, especially those handling regulated or sensitive data under GDPR and other compliance regimes.

Mitigation Recommendations

1. Immediately upgrade all LangChain JS dependencies to @langchain/core 0.3.80 or higher and langchain 0.3.37 or higher (or 1.1.8 and 1.2.3 respectively on the 1.x lines) to apply the official patches.
2. Audit all application code that serializes or deserializes JSON data involving LangChain objects, ensuring no untrusted user input can inject 'lc' keys or other internal markers.
3. Implement strict input validation and sanitization on all user-supplied data before serialization to prevent injection of malicious structures.
4. Employ runtime application self-protection (RASP) or similar monitoring to detect anomalous deserialization activity.
5. Restrict network access to services exposing LangChain-based APIs to trusted sources only, reducing exposure to remote exploitation.
6. Conduct penetration testing focused on deserialization attacks to verify the effectiveness of mitigations.
7. Maintain an up-to-date inventory of LangChain versions in use and monitor vendor advisories for future patches or related vulnerabilities.


Technical Details

Data Version
5.2
Assigner Short Name
GitHub_M
Date Reserved
2025-12-22T23:28:02.917Z
Cvss Version
3.1
State
PUBLISHED

Threat ID: 694b21afd0b9012ffd6d18e2

Added to database: 12/23/2025, 11:11:43 PM

Last enriched: 12/31/2025, 12:27:22 AM

Last updated: 2/7/2026, 5:38:16 AM

Views: 241

