Critical LangChain Core Vulnerability Exposes Secrets via Serialization Injection
A critical security flaw has been disclosed in LangChain Core that could be exploited by an attacker to steal sensitive secrets and even influence large language model (LLM) responses through prompt injection. LangChain Core (i.e., langchain-core) is a core Python package that's part of the LangChain ecosystem, providing the core interfaces and model-agnostic abstractions for building applications powered by LLMs.
AI Analysis
Technical Summary
LangChain Core, a foundational Python package in the LangChain ecosystem used for building applications powered by large language models (LLMs), suffers from a critical serialization injection vulnerability tracked as CVE-2025-68664 (CVSS score 9.3). The flaw arises because the dumps() and dumpd() functions fail to properly escape user-controlled dictionaries containing the 'lc' key, which LangChain uses internally to mark serialized objects. On deserialization, data containing 'lc' keys is treated as legitimate LangChain objects rather than plain data, allowing attackers to instantiate arbitrary objects within trusted namespaces such as langchain_core, langchain, and langchain_community.

This can lead to secret extraction from environment variables when deserialization is performed with the secrets_from_env=True option (previously enabled by default), and potentially to arbitrary code execution through Jinja2 template injection. The attack vector commonly involves manipulating LLM response fields such as additional_kwargs or response_metadata via prompt injection; those fields are then serialized and deserialized in orchestration loops (a toy illustration of the marker-key confusion follows below).

The vulnerability affects LangChain Core versions >=1.0.0 and <1.2.5 (fixed in 1.2.5) as well as earlier versions <0.3.81 (fixed in 0.3.81). A similar vulnerability in LangChain.js is tracked as CVE-2025-68665 and affects the corresponding npm packages, with fixes available in patched releases. The patches released by the LangChain maintainers introduce an allowed_objects allowlist parameter to restrict deserialization to safe classes, disable Jinja2 templates by default, and set secrets_from_env to false to prevent automatic secret loading. The issue illustrates the convergence of AI and classic security risks: untrusted LLM output can lead to severe security breaches if it is not handled carefully.
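To make the marker-key confusion concrete, the following is a minimal, self-contained Python sketch of the general pattern, not LangChain's actual implementation: a loader that treats any dictionary carrying an 'lc'-style marker as a serialized object will happily instantiate attacker-supplied data. The Secret class, REGISTRY, and toy_loads() here are hypothetical stand-ins used purely for illustration.

```python
# Conceptual sketch of marker-key serialization injection -- NOT LangChain's
# real loader. A toy deserializer trusts any dict carrying an "lc" marker
# and turns it into an object from a "trusted" registry.
import json
import os


class Secret:
    """Hypothetical class whose constructor reads a value from the environment."""

    def __init__(self, env_var: str):
        self.value = os.environ.get(env_var, "")


REGISTRY = {"Secret": Secret}  # stand-in for a trusted namespace


def toy_loads(payload: str):
    """Deserialize JSON, promoting any 'lc'-marked dict to a registry object."""

    def hook(d: dict):
        if "lc" in d and d.get("type") == "constructor":
            cls = REGISTRY[d["id"]]
            return cls(**d["kwargs"])  # attacker-chosen class and kwargs
        return d

    return json.loads(payload, object_hook=hook)


# Attacker-controlled metadata (e.g. smuggled into additional_kwargs via
# prompt injection). It looks like plain data but is shaped like a
# serialized object, so the loader instantiates it.
malicious = json.dumps(
    {"note": {"lc": 1, "type": "constructor", "id": "Secret",
              "kwargs": {"env_var": "OPENAI_API_KEY"}}}
)

result = toy_loads(malicious)
print(type(result["note"]))         # <class '__main__.Secret'> -- data became an object
print(bool(result["note"].value))   # True if the environment variable was silently read
```

A real exploit relies on LangChain's actual trusted namespaces and serialized-object schema rather than this toy registry, but the failure mode is the same: the boundary between data and objects blurs when a reserved marker key is not escaped during serialization.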
Potential Impact
For European organizations leveraging LangChain Core in AI-driven applications, this vulnerability poses significant risks including unauthorized disclosure of sensitive secrets such as environment variables, manipulation of AI model outputs through prompt injection, and potential arbitrary code execution within trusted application contexts. The breach of confidentiality could lead to exposure of credentials, API keys, or proprietary data, undermining compliance with GDPR and other data protection regulations. Integrity of AI responses can be compromised, affecting decision-making processes and automated workflows dependent on LLM outputs. Availability might be impacted if attackers execute malicious code to disrupt services. Organizations integrating LangChain in critical sectors such as finance, healthcare, and government are particularly vulnerable due to the sensitivity of data and regulatory scrutiny. The ease of exploitation via untrusted LLM outputs and the widespread adoption of LangChain in AI development amplify the threat's severity. Failure to patch promptly could lead to targeted attacks exploiting serialization injection to escalate privileges or pivot within networks.
Mitigation Recommendations
European organizations should act on the following measures:
- Immediately upgrade LangChain Core to version 1.2.5 or later (or 0.3.81 for the earlier branch) to apply the official patch for CVE-2025-68664 (a version-check sketch follows below).
- Disable the secrets_from_env option to prevent automatic loading of environment secrets during deserialization.
- Restrict deserialization to safe, known types with a strict allowlist via the allowed_objects parameter.
- Block or sanitize Jinja2 templates in user inputs to prevent template injection attacks.
- Validate and sanitize all user-controllable fields, especially those that influence LLM response metadata or additional_kwargs.
- Monitor LLM orchestration loops for suspicious serialization/deserialization activity and anomalous object instantiation.
- Employ runtime application self-protection (RASP) or behavior-based anomaly detection to identify exploitation attempts.
- Review and harden environment variable management to minimize sensitive data exposure.
- Educate developers on secure handling of AI model outputs and the risks of treating LLM responses as trusted input.
- Audit dependencies for similar vulnerabilities, including LangChain.js (CVE-2025-68665), and apply the corresponding patches.
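As a starting point for the upgrade and hardening steps above, here is a hedged Python sketch that checks the installed langchain-core version against the fixed releases and scrubs the reserved 'lc' marker from untrusted metadata before it reaches serialization. The fixed-version thresholds come from the advisory summarized above; the sanitizer is a generic defensive pattern, not an official LangChain API, and field names such as additional_kwargs are only examples of untrusted LLM output.

```python
# Hedged sketch: verify the installed langchain-core version and scrub the
# reserved "lc" marker from untrusted LLM output before serialization.
# Version thresholds are taken from the advisory (0.3.81 / 1.2.5); the
# sanitizer is a generic precaution, not an official LangChain helper.
from importlib.metadata import PackageNotFoundError, version


def _parse(ver: str) -> tuple:
    """Best-effort (major, minor, patch) tuple; tolerates suffixes like '1.2.5rc1'."""
    parts = []
    for piece in ver.split(".")[:3]:
        digits = "".join(ch for ch in piece if ch.isdigit())
        parts.append(int(digits) if digits else 0)
    while len(parts) < 3:
        parts.append(0)
    return tuple(parts)


def langchain_core_is_patched() -> bool:
    """Return True if the installed langchain-core is at or above a fixed release."""
    try:
        installed = _parse(version("langchain-core"))
    except PackageNotFoundError:
        return True  # package not installed, nothing to patch
    if installed >= (1, 0, 0):
        return installed >= (1, 2, 5)   # 1.x branch fixed in 1.2.5
    return installed >= (0, 3, 81)      # 0.x branch fixed in 0.3.81


def strip_lc_markers(data):
    """Recursively drop the reserved 'lc' key from untrusted data.

    Intended for attacker-reachable fields (e.g. additional_kwargs or
    response_metadata copied out of an LLM response) before they are
    serialized and later deserialized in an orchestration loop.
    """
    if isinstance(data, dict):
        return {k: strip_lc_markers(v) for k, v in data.items() if k != "lc"}
    if isinstance(data, list):
        return [strip_lc_markers(item) for item in data]
    return data


if __name__ == "__main__":
    print("langchain-core patched:", langchain_core_is_patched())
    untrusted = {"note": {"lc": 1, "type": "constructor", "kwargs": {}}}
    print(strip_lc_markers(untrusted))  # {'note': {'type': 'constructor', 'kwargs': {}}}
```

When calling the deserialization helpers directly, the patched releases also expose the allowed_objects allowlist and default secrets_from_env to false, as noted in the technical summary; keeping both restrictions in place is preferable to relying on sanitization alone.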
Affected Countries
Germany, France, United Kingdom, Netherlands, Sweden, Finland, Denmark, Belgium, Italy, Spain
Technical Details
- Article Source: https://thehackernews.com/2025/12/critical-langchain-core-vulnerability.html