CVE-2026-26013: CWE-918: Server-Side Request Forgery (SSRF) in langchain-ai langchain

Severity: Low
Tags: Vulnerability, CVE-2026-26013, CWE-918
Published: Tue Feb 10 2026 (02/10/2026, 21:51:07 UTC)
Source: CVE Database V5
Vendor/Project: langchain-ai
Product: langchain

Description

LangChain is a framework for building agents and LLM-powered applications. Prior to 1.2.11, the ChatOpenAI.get_num_tokens_from_messages() method fetches arbitrary image_url values without validation when computing token counts for vision-enabled models. This allows attackers to trigger Server-Side Request Forgery (SSRF) attacks by providing malicious image URLs in user input. This vulnerability is fixed in 1.2.11.
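To make the attack surface concrete, the hedged sketch below shows how a vision-style message carrying an attacker-controlled image_url could reach the token-counting path in an affected release. The import paths and the internal metadata URL are illustrative assumptions, not taken from the advisory.

```python
# Illustration only: how an attacker-supplied image_url could reach the
# vulnerable token-counting path in langchain < 1.2.11 (do not run against real hosts).
from langchain_openai import ChatOpenAI          # import path assumed; varies by release
from langchain_core.messages import HumanMessage

llm = ChatOpenAI(model="gpt-4o")  # a vision-capable model

# User-supplied content embedding an internal address (hypothetical metadata endpoint).
malicious_message = HumanMessage(
    content=[
        {"type": "text", "text": "Describe this image."},
        {
            "type": "image_url",
            "image_url": {"url": "http://169.254.169.254/latest/meta-data/"},
        },
    ]
)

# In affected versions, counting tokens for a vision model fetches the image_url
# without validation, causing the application server to request the internal URL.
token_count = llm.get_num_tokens_from_messages([malicious_message])
```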

AI-Powered Analysis

Last updated: 02/18/2026, 09:48:32 UTC

Technical Analysis

CVE-2026-26013 is a Server-Side Request Forgery (SSRF) vulnerability in the langchain framework, affecting versions prior to 1.2.11. LangChain is a popular framework for building agents and applications powered by large language models (LLMs), including those with vision capabilities. The vulnerability exists in the ChatOpenAI.get_num_tokens_from_messages() method, which calculates token counts from messages. When processing messages for vision-enabled models, this method fetches image URLs supplied in user input without validation or sanitization.

An attacker can exploit this by submitting malicious image URLs that cause the server to initiate HTTP requests to arbitrary locations. This can lead to internal network scanning, unauthorized access to internal services, or denial of service if the server is tricked into making numerous or large requests. The vulnerability is classified under CWE-918 (SSRF). The CVSS v3.1 base score is 3.7, reflecting low severity, primarily because the attacker must be able to supply input that reaches the vulnerable method and the attack complexity is high. No privileges or user interaction are required, but the impact on confidentiality and integrity is rated none, with only a low impact on availability. There are no known exploits in the wild at the time of publication.

The issue is resolved in langchain version 1.2.11, where input validation and URL fetching logic have been improved to prevent SSRF attacks. Organizations using langchain for LLM-powered applications, especially those integrating vision models, should upgrade to the patched version to eliminate this risk.
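A quick way to confirm whether a deployment is exposed is to compare the installed version against the fixed release. This sketch assumes the affected distribution is the langchain package named in the advisory and that the packaging library is available.

```python
# Check the installed langchain version against the fixed release (1.2.11).
from importlib.metadata import PackageNotFoundError, version
from packaging.version import Version

FIXED = Version("1.2.11")

try:
    installed = Version(version("langchain"))
except PackageNotFoundError:
    print("langchain is not installed in this environment")
else:
    if installed < FIXED:
        print(f"langchain {installed} is affected by CVE-2026-26013; upgrade to {FIXED} or later")
    else:
        print(f"langchain {installed} already includes the fix")
```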

Potential Impact

For European organizations, the impact of this SSRF vulnerability is generally low but context-dependent. Organizations using langchain versions prior to 1.2.11 in production AI applications that process user-supplied image URLs are at risk of having their internal networks probed or accessed by attackers. This could expose sensitive internal services or lead to denial of service conditions if exploited at scale. While the vulnerability does not directly compromise data confidentiality or integrity, it can be leveraged as a foothold for further attacks or reconnaissance within internal networks. Industries with sensitive internal infrastructure, such as finance, healthcare, or critical infrastructure, could face increased risk if attackers use SSRF to map internal services or bypass network controls. The lack of known exploits reduces immediate risk, but the widespread adoption of langchain in AI applications means that unpatched deployments could be targeted in the future. The vulnerability's low CVSS score suggests limited direct damage, but its presence in AI frameworks used for automation and decision-making could indirectly affect service availability or trustworthiness.

Mitigation Recommendations

1. Upgrade all langchain deployments to version 1.2.11 or later immediately to apply the official fix that validates and sanitizes image URLs.
2. Implement network-level controls such as egress filtering and web application firewalls (WAFs) to restrict outbound HTTP requests from AI application servers to trusted destinations only.
3. Employ input validation and sanitization at the application layer to reject or sanitize user-supplied URLs before processing (see the sketch after this list).
4. Monitor logs for unusual outbound requests or patterns indicative of SSRF exploitation attempts.
5. Use network segmentation to isolate AI application servers from sensitive internal resources to limit the impact of any SSRF exploitation.
6. Conduct regular security assessments and penetration tests focusing on SSRF and related vulnerabilities in AI-powered services.
7. Educate developers and security teams about SSRF risks in AI frameworks and enforce secure coding practices when handling external resources.
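As an illustration of the application-layer validation in item 3, the following sketch rejects user-supplied image URLs that use non-HTTPS schemes or resolve to private, loopback, link-local, or otherwise non-global addresses. The function name and policy are assumptions for this example, not part of langchain, and the check does not defend against DNS rebinding between validation and fetch time.

```python
import ipaddress
import socket
from urllib.parse import urlparse

def is_safe_image_url(url: str) -> bool:
    """Illustrative pre-filter for user-supplied image URLs (not a langchain API)."""
    parsed = urlparse(url)
    if parsed.scheme != "https" or not parsed.hostname:
        return False
    try:
        # Resolve the hostname and reject any address that is not globally routable
        # (covers private ranges, loopback, link-local, and reserved space).
        for info in socket.getaddrinfo(parsed.hostname, None):
            if not ipaddress.ip_address(info[4][0]).is_global:
                return False
    except (socket.gaierror, ValueError):
        return False
    return True

# Usage: drop unsafe URLs before they reach token counting or model calls.
urls = ["https://example.com/cat.png", "http://10.0.0.5/admin"]
safe_urls = [u for u in urls if is_safe_image_url(u)]
```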

Technical Details

Data Version: 5.2
Assigner Short Name: GitHub_M
Date Reserved: 2026-02-09T21:36:29.554Z
CVSS Version: 3.1
State: PUBLISHED

Threat ID: 698bae314b57a58fa12e3ee6

Added to database: 2/10/2026, 10:16:17 PM

Last enriched: 2/18/2026, 9:48:32 AM

Last updated: 2/21/2026, 12:15:13 AM
