
CVE-2025-5197: CWE-1333 Inefficient Regular Expression Complexity in huggingface huggingface/transformers

Severity: Medium
Tags: vulnerability, cve-2025-5197, cwe-1333
Published: Wed Aug 06 2025 (08/06/2025, 11:53:37 UTC)
Source: CVE Database V5
Vendor/Project: huggingface
Product: huggingface/transformers

Description

A Regular Expression Denial of Service (ReDoS) vulnerability exists in the Hugging Face Transformers library, specifically in the `convert_tf_weight_name_to_pt_weight_name()` function. This function, responsible for converting TensorFlow weight names to PyTorch format, uses a regex pattern `/[^/]*___([^/]*)/` that can be exploited to cause excessive CPU consumption through crafted input strings due to catastrophic backtracking. The vulnerability affects versions up to 4.51.3 and is fixed in version 4.53.0. This issue can lead to service disruption and resource exhaustion, and exposes API services that perform model conversion between TensorFlow and PyTorch formats to denial of service.
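A minimal sketch of the problematic matching behavior, using the regex pattern quoted above. The sample weight names and the shape of the crafted input are illustrative assumptions, not strings taken from the advisory or from the library's test suite:

```python
import re

# Pattern quoted in the advisory, used (per the advisory) when converting
# TensorFlow weight names such as "encoder/dense___kernel" to PyTorch form.
PATTERN = re.compile(r"[^/]*___([^/]*)")

# Benign input: the capture group extracts the suffix after the "___" marker.
m = PATTERN.search("encoder/dense___kernel")
assert m is not None and m.group(1) == "kernel"

# Crafted input: a long run of non-"/" characters containing no "___" marker.
# At every start position the engine greedily consumes the rest of the run
# with [^/]*, then backtracks one character at a time looking for "___",
# giving roughly quadratic work in the input length (CWE-1333 behavior).
crafted = "a" * 5_000
assert PATTERN.search(crafted) is None  # terminates, but cost grows quadratically
```

Scaling the crafted string up makes the quadratic cost visible in wall-clock time, which is how an attacker-controlled weight name could pin a CPU during conversion.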

AI-Powered Analysis

Last updated: 08/06/2025, 12:17:50 UTC

Technical Analysis

CVE-2025-5197 is a Regular Expression Denial of Service (ReDoS) vulnerability identified in the Hugging Face Transformers library, specifically within the function convert_tf_weight_name_to_pt_weight_name(). This function is responsible for converting TensorFlow weight names to PyTorch weight names, a critical step in model conversion workflows between these two popular machine learning frameworks. The vulnerability arises from the use of an inefficient regular expression pattern /[^/]*___([^/]*)/ that can be exploited by specially crafted input strings to cause catastrophic backtracking. Catastrophic backtracking occurs when the regex engine spends excessive CPU cycles attempting to match complex input patterns, leading to significant resource exhaustion. This can cause service disruption or denial of service conditions in applications relying on this function.

The issue affects all versions of the huggingface/transformers library up to 4.51.3 and was addressed in version 4.53.0. The CVSS v3.0 base score is 5.3 (medium severity), reflecting that the vulnerability can be exploited remotely without authentication or user interaction, but impacts only availability (no confidentiality or integrity impact). No known exploits are currently reported in the wild.

The vulnerability is particularly relevant for organizations that perform automated or on-demand model conversions between TensorFlow and PyTorch formats using the vulnerable library, as it could be triggered by maliciously crafted model weight names or inputs, leading to excessive CPU consumption and potential denial of service. This could affect API services, machine learning pipelines, or cloud-based AI platforms that integrate Hugging Face Transformers for model interoperability.

Potential Impact

For European organizations, the impact of CVE-2025-5197 depends on their adoption of the Hugging Face Transformers library in their AI/ML workflows, especially those involving model conversion between TensorFlow and PyTorch. Organizations providing AI services, cloud ML platforms, or research institutions that automate model conversions could face service disruptions or degraded performance due to CPU exhaustion caused by this vulnerability. This could lead to denial of service conditions affecting availability of AI-driven applications or APIs. While the vulnerability does not compromise data confidentiality or integrity, the availability impact could disrupt critical AI services, delay ML model deployment, and increase operational costs due to resource overuse.

In sectors like finance, healthcare, automotive, and telecommunications, where AI models are increasingly integrated into production systems, such disruptions could have cascading effects on business operations and customer experience. Additionally, organizations relying on third-party AI platforms that embed the vulnerable library might be indirectly affected if those platforms are exploited. Given the growing reliance on AI in Europe and regulatory emphasis on service continuity, this vulnerability poses a moderate operational risk.

Mitigation Recommendations

To mitigate CVE-2025-5197, European organizations should:

1) Immediately upgrade the huggingface/transformers library to version 4.53.0 or later, where the inefficient regex pattern has been fixed.
2) Implement input validation and sanitization on any user-supplied or external inputs that influence model weight names or conversion parameters, so that maliciously crafted strings cannot trigger the regex vulnerability.
3) Monitor CPU usage and performance metrics of services performing model conversions to detect anomalous spikes indicative of exploitation attempts.
4) Employ rate limiting and request throttling on APIs or services that accept model conversion requests to reduce the risk of denial of service.
5) For critical AI pipelines, consider isolating model conversion processes in sandboxed or containerized environments to contain resource-exhaustion impacts.
6) Engage with vendors or third-party AI service providers to confirm they have applied the patch or mitigations if their platforms use the affected library.
7) Maintain up-to-date inventories of AI/ML software dependencies to rapidly identify and remediate vulnerable components.

These targeted actions go beyond generic advice by focusing on the specific function and context of the vulnerability, emphasizing proactive patching, input control, and operational monitoring.
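The input-validation step (2) can be sketched as a guard run before externally supplied weight names reach the converter. Both the length cap and the character allow-list below are illustrative policy choices, not values taken from the advisory or from the Transformers codebase:

```python
import re

# Assumption: legitimate TF weight names are short and use a narrow charset.
MAX_WEIGHT_NAME_LEN = 512
ALLOWED = re.compile(r"^[A-Za-z0-9_./-]+$")

def is_safe_weight_name(name: str) -> bool:
    """Reject oversized or unexpectedly formatted weight names before they
    reach any regex-based conversion routine. The length check runs first,
    so the allow-list regex never sees attacker-sized input."""
    return len(name) <= MAX_WEIGHT_NAME_LEN and bool(ALLOWED.fullmatch(name))

assert is_safe_weight_name("bert/encoder/layer_0/kernel___cls")
assert not is_safe_weight_name("a" * 100_000)   # oversized input rejected
assert not is_safe_weight_name("weight\nname")  # unexpected characters rejected
```

A length cap like this is a useful defense in depth even after upgrading, since it bounds the cost of any future polynomial-time pattern in the same code path.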


Technical Details

Data Version: 5.1
Assigner Short Name: @huntr_ai
Date Reserved: 2025-05-26T09:26:53.172Z
CVSS Version: 3.0
State: PUBLISHED

Threat ID: 68934465ad5a09ad00f11e50

Added to database: 8/6/2025, 12:02:45 PM

Last enriched: 8/6/2025, 12:17:50 PM

Last updated: 8/8/2025, 12:34:03 AM
