
CVE-2024-58340: CWE-1333 Inefficient Regular Expression Complexity in LangChain AI LangChain

Severity: High
Tags: Vulnerability · CVE-2024-58340 · CWE-1333
Published: Mon Jan 12 2026 (01/12/2026, 23:05:00 UTC)
Source: CVE Database V5
Vendor/Project: LangChain AI
Product: LangChain

Description

LangChain versions up to and including 0.3.1 contain a regular expression denial-of-service (ReDoS) vulnerability in the MRKLOutputParser.parse() method (libs/langchain/langchain/agents/mrkl/output_parser.py). The parser applies a backtracking-prone regular expression when extracting tool actions from model output. An attacker who can supply or influence the parsed text (for example via prompt injection in downstream applications that pass LLM output directly into MRKLOutputParser.parse()) can trigger excessive CPU consumption by providing a crafted payload, causing significant parsing delays and a denial-of-service condition.

AI-Powered Analysis

Machine-generated threat intelligence

Last updated: 03/05/2026, 09:12:53 UTC

Technical Analysis

CVE-2024-58340 identifies a regular expression denial-of-service (ReDoS) vulnerability in the LangChain AI LangChain library, specifically affecting versions up to and including 0.3.1. The vulnerability resides in the MRKLOutputParser.parse() method located in libs/langchain/langchain/agents/mrkl/output_parser.py. This method employs a regular expression that is prone to excessive backtracking when parsing model-generated output to extract tool actions. An attacker who can supply or influence the input text—commonly through prompt injection attacks in downstream applications that feed large language model (LLM) outputs directly into this parser—can craft malicious payloads designed to exploit the regex inefficiency. When processed, these payloads cause the regex engine to consume excessive CPU resources, leading to significant delays in parsing and potentially causing denial-of-service (DoS) conditions by exhausting system resources. The vulnerability requires no authentication or user interaction, and the attack vector is network accessible, as it depends on influencing the input text processed by the parser. Although no public exploits have been reported, the high CVSS 8.7 score reflects the ease of exploitation and the high impact on availability. The root cause is classified under CWE-1333, which pertains to inefficient regular expression complexity leading to performance degradation. This vulnerability highlights the risks of using complex regex patterns without safeguards in AI-driven parsing components.
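The failure class described above can be sketched with a minimal, self-contained example. The pattern below is an illustrative stand-in, not the actual regex from MRKLOutputParser.parse(): its nested quantifiers force the `re` engine, on a non-matching tail, to retry exponentially many ways of splitting the input between the two `+` loops.

```python
import re
import time

# Illustrative backtracking-prone pattern (NOT the real LangChain regex):
# '(a+)+' nests one unbounded repetition inside another, so a failed match
# makes the engine explore every partition of the 'a' run.
EVIL = re.compile(r"^(a+)+$")

def time_failed_match(n: int) -> float:
    """Time a guaranteed-to-fail match against n repeated characters."""
    payload = "a" * n + "!"  # the trailing '!' forces exhaustive backtracking
    start = time.perf_counter()
    assert EVIL.match(payload) is None
    return time.perf_counter() - start

for n in (14, 16, 18):
    # Runtime roughly quadruples for every two extra payload characters.
    print(f"n={n:2d}: {time_failed_match(n):.4f}s")
```

Because the work grows exponentially in the payload length, even a short attacker-controlled string pushed through such a pattern can pin a CPU core for minutes, which is the DoS condition the advisory describes.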

Potential Impact

The primary impact of CVE-2024-58340 is a denial-of-service condition caused by excessive CPU consumption during parsing operations. Organizations deploying LangChain in AI workflows, especially those that process untrusted or user-influenced input, face risks of service degradation or outages. This can disrupt critical AI-driven automation, decision-making processes, or customer-facing applications relying on LangChain. The vulnerability could be exploited remotely without authentication, increasing the attack surface. In environments with high concurrency, the resource exhaustion could cascade, affecting other services and causing broader operational impacts. Additionally, the inability to parse outputs efficiently may degrade user experience and reduce trust in AI services. While confidentiality and integrity impacts are not directly implicated, availability degradation alone can have severe business consequences, including financial loss, reputational damage, and compliance issues. The lack of known exploits currently provides a window for proactive mitigation before widespread attacks emerge.

Mitigation Recommendations

To mitigate CVE-2024-58340, organizations should upgrade LangChain to a patched version as soon as one is available. In the absence of an official patch, developers should refactor or replace the vulnerable regular expression in MRKLOutputParser.parse() with parsing logic that avoids backtracking-prone patterns. Validating and sanitizing user-influenced input before it reaches the parser reduces the attack surface. Rate limiting and resource-usage monitoring on services that invoke LangChain parsing can help detect and mitigate abuse attempts. Additionally, isolating LangChain processing in resource-constrained environments (e.g., containers with CPU limits) can prevent system-wide resource exhaustion. Security teams should also review application architectures to minimize direct injection of untrusted input into LLM outputs that feed vulnerable parsing components. Finally, monitoring for anomalous CPU spikes and parsing delays can provide early warning of exploitation attempts.
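Two of the recommendations above, refactoring away backtracking-prone patterns and restricting input before it reaches the parser, can be sketched as follows. The cap value and pattern are illustrative assumptions, not values from LangChain: a nested-quantifier pattern like `(a+)+` matches exactly the same strings as the equivalent single `a+`, but the rewrite fails in linear rather than exponential time.

```python
import re

# Assumed cap on parseable text size; a real limit should be tuned to the
# application's expected model-output length.
MAX_PARSE_LEN = 10_000

# Linear-time equivalent of a nested-quantifier pattern such as '(a+)+':
# removing the inner repetition removes the backtracking blow-up without
# changing the matched language.
SAFE = re.compile(r"^a+$")

def guarded_match(text: str):
    """Reject oversized input before any regex runs, then match with a
    pattern that has no nested quantifiers. Raises ValueError when the
    input exceeds the cap."""
    if len(text) > MAX_PARSE_LEN:
        raise ValueError(
            f"refusing to parse {len(text)} chars (cap {MAX_PARSE_LEN})"
        )
    return SAFE.match(text)
```

As a defense in depth, the same guarded entry point can run inside a container with CPU quotas, so that even a missed pathological input degrades only the parsing worker rather than the host.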


Technical Details

Data Version: 5.2
Assigner Short Name: VulnCheck
Date Reserved: 2026-01-09T20:28:41.285Z
CVSS Version: 4.0
State: PUBLISHED

Threat ID: 69658281da2266e838450d22

Added to database: 1/12/2026, 11:23:45 PM

Last enriched: 3/5/2026, 9:12:53 AM

Last updated: 3/25/2026, 1:54:13 AM

Views: 291


