CVE-2025-6051: CWE-1333 Inefficient Regular Expression Complexity in huggingface huggingface/transformers
A Regular Expression Denial of Service (ReDoS) vulnerability was discovered in the Hugging Face Transformers library, specifically within the `normalize_numbers()` method of the `EnglishNormalizer` class. This vulnerability affects versions up to 4.52.4 and is fixed in version 4.53.0. The issue arises from the method's handling of numeric strings, which can be exploited using crafted input strings containing long sequences of digits, leading to excessive CPU consumption. This vulnerability impacts text-to-speech and number normalization tasks, potentially causing service disruption, resource exhaustion, and denial-of-service conditions in exposed APIs.
AI Analysis
Technical Summary
CVE-2025-6051 is a Regular Expression Denial of Service (ReDoS) vulnerability identified in the Hugging Face Transformers library, specifically within the `normalize_numbers()` method of the `EnglishNormalizer` class. This vulnerability affects all versions up to 4.52.4 and was addressed in version 4.53.0. The root cause lies in inefficient regular expression handling of numeric strings, where crafted input containing long sequences of digits can trigger catastrophic backtracking in the regex engine. This leads to excessive CPU consumption and potential service disruption. The affected functionality is primarily related to text-to-speech and number normalization tasks, which are common in natural language processing (NLP) pipelines. An attacker can exploit this vulnerability by submitting specially crafted input strings to APIs or services that utilize the vulnerable method, causing resource exhaustion and denial of service without requiring authentication or user interaction. The CVSS score of 5.3 (medium severity) reflects the network vector, low attack complexity, no privileges or user interaction required, and an impact limited to availability (no confidentiality or integrity impact). No known exploits are reported in the wild yet, but the vulnerability poses a risk to any system relying on the affected versions of the Hugging Face Transformers library for processing numeric text data.
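To make the failure mode concrete, the sketch below demonstrates catastrophic backtracking with a deliberately vulnerable pattern. Note the pattern shown is an illustrative example of the nested-quantifier shape behind CWE-1333, not the actual regex used inside `EnglishNormalizer.normalize_numbers()`:

```python
import re
import time

# Illustrative only: this is NOT the pattern from transformers'
# EnglishNormalizer. It is a classic catastrophic-backtracking shape
# (a quantifier nested inside another quantifier) used to show how
# ReDoS behaves on long digit runs.
VULNERABLE = re.compile(r"^(\d+)+$")

def time_match(s: str) -> float:
    """Time a single match attempt against the vulnerable pattern."""
    start = time.perf_counter()
    VULNERABLE.match(s)
    return time.perf_counter() - start

# A long digit run followed by a non-digit almost matches, so the
# engine tries every way of splitting the digits between the two
# quantifiers before failing: work grows exponentially with length.
fast = time_match("1" * 14 + "a")  # short run: fails quickly
slow = time_match("1" * 22 + "a")  # slightly longer run: far slower
```

Adding just eight digits multiplies the number of backtracking paths by roughly 2^8, which is why a crafted input of a few dozen digits can pin a CPU core for seconds or longer.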
Potential Impact
For European organizations, the impact of this vulnerability can be significant, especially for those deploying NLP services that utilize the Hugging Face Transformers library for text normalization or text-to-speech applications. Potential impacts include service outages or degraded performance due to CPU exhaustion, which can disrupt customer-facing applications, internal automation, or AI-driven analytics. This can lead to operational downtime, loss of productivity, and reputational damage. Since the vulnerability does not compromise data confidentiality or integrity, the primary concern is availability. Organizations offering AI-based services or APIs that process user input with the vulnerable library are at risk of denial of service attacks. This could affect sectors such as finance, healthcare, telecommunications, and public services, where NLP tools are increasingly integrated. Additionally, the vulnerability could be leveraged as part of a larger attack chain to cause distraction or resource depletion during targeted attacks.
Mitigation Recommendations
European organizations should immediately upgrade any deployments of the Hugging Face Transformers library to version 4.53.0 or later, where the vulnerability is fixed. If upgrading is not immediately feasible, implement input validation and sanitization to detect and reject unusually long numeric sequences before they reach the vulnerable `normalize_numbers()` method. Rate limiting and anomaly detection on API endpoints processing text normalization can help mitigate exploitation attempts. Monitoring CPU usage and setting resource limits on services using the library can prevent system-wide impact. Additionally, organizations should review their NLP pipelines to identify any indirect dependencies on the vulnerable library and ensure they are updated accordingly. Incorporating fuzz testing and regular expression complexity analysis into the development lifecycle can help detect similar issues proactively. Finally, maintain awareness of vendor advisories and community updates regarding this and related vulnerabilities.
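Where upgrading is delayed, a pre-filter on incoming text can serve as a stopgap. The helper below is a hypothetical sketch (the function name and threshold are assumptions, not a Transformers API) that rejects text containing abnormally long digit runs before it reaches number-normalization code:

```python
import re

# Assumption: legitimate numbers in natural text are far shorter than
# this. Tune MAX_DIGIT_RUN to your own data before deploying.
MAX_DIGIT_RUN = 40

# Matches any run of digits longer than the allowed maximum.
_LONG_DIGIT_RUN = re.compile(rf"\d{{{MAX_DIGIT_RUN + 1},}}")

def is_safe_numeric_input(text: str) -> bool:
    """Return False if the text contains a digit run long enough to
    risk pathological regex behaviour in downstream normalization
    (e.g. EnglishNormalizer.normalize_numbers())."""
    return _LONG_DIGIT_RUN.search(text) is None
```

This check is linear in the input length, so it cannot itself be abused for ReDoS. Upgrading (`pip install "transformers>=4.53.0"`) remains the primary fix; the guard only reduces exposure in the interim.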
Affected Countries
Germany, France, United Kingdom, Netherlands, Sweden, Finland, Denmark, Belgium, Italy, Spain
Technical Details
- Data Version: 5.1
- Assigner Short Name: @huntr_ai
- Date Reserved: 2025-06-13T10:39:33.128Z
- CVSS Version: 3.0
- State: PUBLISHED
Threat ID: 68c6f734ed64d8647ec09549
Added to database: 9/14/2025, 5:11:16 PM
Last enriched: 9/22/2025, 12:38:03 AM
Last updated: 2/5/2026, 6:04:01 AM