
CVE-2025-14921: CWE-502: Deserialization of Untrusted Data in Hugging Face Transformers

Severity: High
Tags: Vulnerability, CVE-2025-14921, CWE-502
Published: Tue Dec 23 2025 (12/23/2025, 21:04:23 UTC)
Source: CVE Database V5
Vendor/Project: Hugging Face
Product: Transformers

Description

CVE-2025-14921 is a high-severity remote code execution vulnerability in the Hugging Face Transformers library, specifically affecting the Transformer-XL model deserialization process. The flaw arises from improper validation of user-supplied model files, enabling deserialization of untrusted data. Exploitation requires user interaction, such as visiting a malicious webpage or opening a crafted file, allowing attackers to execute arbitrary code with the privileges of the current user. The vulnerability impacts confidentiality, integrity, and availability of affected systems. No known exploits are currently reported in the wild. European organizations using vulnerable versions of Hugging Face Transformers are at risk, especially those integrating Transformer-XL models in their AI workflows. Mitigation involves strict validation of model files, restricting model sources, and applying updates once patches become available. Countries with significant AI development and adoption, such as Germany, France, and the UK, are more likely to be affected due to higher usage of these tools.

AI-Powered Analysis

Last updated: 12/31/2025, 00:21:37 UTC

Technical Analysis

CVE-2025-14921 is a deserialization vulnerability, classified under CWE-502, in the Hugging Face Transformers library, specifically impacting the Transformer-XL model. The vulnerability stems from the library's failure to properly validate and sanitize user-supplied model files during deserialization. Deserialization converts data from a stored format back into an in-memory object; if untrusted data is deserialized without validation, it can lead to arbitrary code execution. In this case, an attacker can craft malicious model files that, when loaded by the vulnerable Transformer-XL implementation, execute arbitrary code on the host system. The attack requires user interaction, such as opening a malicious file or visiting a malicious webpage that triggers loading of the compromised model. The vulnerability has a CVSS 3.0 base score of 7.8 (high severity), with a local attack vector, low attack complexity, no privileges required, and user interaction required. Successful exploitation fully compromises the confidentiality, integrity, and availability of the system running the vulnerable library. Although no exploits are currently known in the wild, the widespread use of Hugging Face Transformers in AI and ML workflows makes this a significant risk. The vulnerability was assigned by ZDI (ZDI-CAN-25424) and published on December 23, 2025. No patches are currently linked, so organizations should monitor Hugging Face for updates.
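
To illustrate the general CWE-502 mechanism (this is a generic sketch, not the actual Transformers exploit, whose details are not public): Python's pickle format lets an object's `__reduce__` method name a callable that the deserializer invokes automatically, so loading untrusted bytes can run attacker-chosen code.

```python
import pickle

class MaliciousStub:
    """Hypothetical payload class for demonstration only."""
    def __reduce__(self):
        # pickle stores (callable, args); pickle.loads calls eval("6 * 7")
        # automatically during deserialization -- no method call needed.
        return (eval, ("6 * 7",))

payload = pickle.dumps(MaliciousStub())   # what an attacker would ship
result = pickle.loads(payload)            # code executes on load
print(result)                             # 42
```

A real payload would substitute something like `os.system` for `eval`, which is why model files based on pickle must never be loaded from untrusted sources.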

Potential Impact

For European organizations, this vulnerability poses a significant risk to AI and machine learning infrastructures that utilize Hugging Face Transformers, particularly the Transformer-XL model. Successful exploitation could lead to remote code execution, allowing attackers to gain control over affected systems, steal sensitive data, manipulate AI model outputs, or disrupt services. This could impact sectors relying heavily on AI, such as finance, healthcare, automotive, and research institutions. The requirement for user interaction limits mass exploitation but targeted attacks, such as spear-phishing or supply chain attacks, remain a concern. Given the increasing adoption of AI technologies in Europe, the potential for data breaches, intellectual property theft, and operational disruption is substantial. Additionally, compromised AI models could lead to erroneous outputs, affecting decision-making processes and compliance with regulations like GDPR.

Mitigation Recommendations

1. Immediately restrict the sources of Transformer-XL model files to trusted repositories and verified providers to minimize exposure to malicious files.
2. Implement strict validation and integrity checks (e.g., cryptographic signatures or hashes) on all model files before loading them into the Transformers library.
3. Educate users and developers about the risks of opening untrusted model files or visiting unverified URLs that could trigger model loading.
4. Monitor official Hugging Face channels for patches or updates addressing this vulnerability and apply them promptly once available.
5. Employ runtime application self-protection (RASP) or endpoint detection and response (EDR) solutions to detect anomalous behaviors indicative of exploitation attempts.
6. Isolate AI workloads in sandboxed or containerized environments to limit the impact of potential code execution.
7. Review and harden user privileges to minimize the damage from code execution under user context.
8. Conduct regular security audits of AI pipelines and dependencies to identify and remediate similar deserialization risks.
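
Recommendation 2 can be sketched as a digest allowlist checked before any deserializer touches the file. The file name and digest below are placeholders (the digest shown is the SHA-256 of an empty file), not real Hugging Face artifacts.

```python
import hashlib
from pathlib import Path

# Hypothetical allowlist mapping model file names to known-good SHA-256
# digests, distributed out-of-band from the model files themselves.
TRUSTED_DIGESTS = {
    "transfo-xl-wt103.bin":
        "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def verify_model_file(path: Path) -> bool:
    """Return True only if the file's SHA-256 digest matches the allowlist."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return TRUSTED_DIGESTS.get(path.name) == digest
```

Only files that pass `verify_model_file` would then be handed to the model loader; an unknown name or a tampered byte fails the check.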


Technical Details

Data Version: 5.2
Assigner Short Name: zdi
Date Reserved: 2025-12-18T20:43:19.963Z
CVSS Version: 3.0
State: PUBLISHED

Threat ID: 694b064e4eddf7475afca170

Added to database: 12/23/2025, 9:14:54 PM

Last enriched: 12/31/2025, 12:21:37 AM

Last updated: 2/7/2026, 3:56:40 AM

Views: 36

