CVE-2025-14920: CWE-502: Deserialization of Untrusted Data in Hugging Face Transformers

Severity: High
Tags: Vulnerability, CVE-2025-14920, CWE-502
Published: 12/23/2025, 21:04:36 UTC
Source: CVE Database V5
Vendor/Project: Hugging Face
Product: Transformers

Description

CVE-2025-14920 is a high-severity remote code execution vulnerability in the Hugging Face Transformers library, specifically affecting the Perceiver model deserialization process. The flaw arises from improper validation of user-supplied model files, allowing deserialization of untrusted data. Exploitation requires user interaction, such as opening a malicious file or visiting a crafted webpage. Successful exploitation enables attackers to execute arbitrary code with the privileges of the current user. Although no known exploits are reported in the wild, the vulnerability poses significant risks to confidentiality, integrity, and availability. The vulnerability has a CVSS score of 7.8, reflecting its high impact and relatively low exploitation complexity. European organizations using Hugging Face Transformers in AI/ML workflows, especially those processing untrusted or external model files, are at risk. Mitigation involves strict validation of model inputs, restricting model file sources, and applying patches once available. Countries with strong AI development sectors and extensive use of open-source ML frameworks, such as Germany, France, and the UK, are most likely affected.
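To make the risk concrete, the pattern below shows the kind of loading call this class of flaw targets. It is a generic illustration, not the specific Perceiver code path (which has not been published); the filename is hypothetical, and the exact behavior of weights_only depends on the PyTorch version in use.

    import torch

    # A minimal sketch of the risky pattern behind CWE-502 in ML tooling:
    # classic PyTorch checkpoints are pickle archives, so loading one runs
    # the pickle machinery. "perceiver_model.bin" is a hypothetical file
    # received from an untrusted source.
    state = torch.load("perceiver_model.bin", weights_only=False)
    # With weights_only=False (the historical default before PyTorch 2.6),
    # a crafted checkpoint can execute arbitrary code during this call,
    # with the privileges of the current user.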

AI-Powered Analysis

AI analysis last updated: 12/31/2025, 00:21:24 UTC

Technical Analysis

CVE-2025-14920 is a deserialization vulnerability classified under CWE-502 affecting the Hugging Face Transformers library, specifically the Perceiver model component. The vulnerability stems from the library's failure to properly validate and sanitize user-supplied model files during the deserialization process. Deserialization is the process of converting data from a stored format back into a live object; if untrusted data is deserialized without validation, it can lead to arbitrary code execution. In this case, an attacker can craft malicious model files that, when loaded by the vulnerable Transformers library, execute arbitrary code within the context of the current user. Exploitation requires user interaction, such as opening a malicious file or visiting a malicious webpage that triggers the loading of the malicious model. The CVSS 3.0 vector (AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H) indicates a local attack vector with required user interaction but no privileges or complex preconditions, and a high impact on confidentiality, integrity, and availability. Although no public exploits are currently known, the vulnerability poses a serious risk due to the widespread adoption of Hugging Face Transformers in AI and machine learning applications. The lack of patches at the time of disclosure necessitates immediate mitigation through operational controls. The vulnerability was assigned by ZDI (Zero Day Initiative) and published on December 23, 2025.
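The mechanism is easiest to see in plain Python. The sketch below is a generic demonstration of CWE-502 using the standard pickle module, not the actual Perceiver exploit: any object whose __reduce__ returns a callable has that callable invoked during deserialization, so merely loading the file executes the attacker's code.

    import os
    import pickle

    class MaliciousPayload:
        # __reduce__ tells pickle how to reconstruct the object; returning
        # (callable, args) makes the unpickler call it. An attacker embeds
        # an object like this inside a pickle-based model file.
        def __reduce__(self):
            return (os.system, ("echo arbitrary code ran at load time",))

    blob = pickle.dumps(MaliciousPayload())

    # The victim only "loads a model" -- deserialization alone triggers
    # the payload, in the context of the current user.
    pickle.loads(blob)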

Potential Impact

For European organizations, the impact of CVE-2025-14920 can be significant, particularly for those leveraging Hugging Face Transformers in AI/ML pipelines, research, or production environments. Successful exploitation could allow attackers to execute arbitrary code, potentially leading to data breaches, system compromise, or disruption of AI services. This could result in loss of sensitive intellectual property, exposure of personal data protected under GDPR, and operational downtime. Organizations relying on automated model loading from external or untrusted sources are especially vulnerable. The confidentiality, integrity, and availability of AI models and associated data can be compromised, undermining trust in AI outputs and causing reputational damage. Given the increasing integration of AI in critical sectors such as finance, healthcare, and manufacturing across Europe, the threat extends beyond IT systems to business continuity and regulatory compliance. The requirement for user interaction limits remote mass exploitation but does not eliminate risk, especially in environments where users frequently handle external model files or visit untrusted sites.

Mitigation Recommendations

1. Immediately restrict the loading of model files to trusted, verified sources only, avoiding any untrusted or external model files.
2. Implement strict input validation and sanitization for all model files before deserialization, employing allowlists or schema validation where possible (a sketch follows this list).
3. Isolate the execution environment for model loading, such as sandboxing or containerization, to limit the impact of potential code execution.
4. Educate users about the risks of opening untrusted model files or visiting suspicious websites that could trigger malicious model loading.
5. Monitor and audit model loading activities and system behavior for anomalies indicative of exploitation attempts.
6. Apply security updates and patches from Hugging Face promptly once they become available.
7. Consider disabling or limiting features that automatically load or parse model files from external sources.
8. Employ endpoint protection solutions capable of detecting suspicious deserialization or code execution behaviors.
9. Review and harden access controls around AI/ML infrastructure to minimize the privileges of users and processes handling model files.
10. Establish incident response plans specific to AI/ML infrastructure compromise scenarios.
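As a starting point for items 1 and 2, the sketch below combines a source allowlist with the non-executable safetensors weight format. TRUSTED_MODEL_DIRS and load_model_safely are hypothetical names to adapt to your environment; use_safetensors=True is a real from_pretrained option that refuses to fall back to pickle-based weights.

    from pathlib import Path

    from transformers import AutoModel

    # Hypothetical allowlist of vetted local model locations; point this
    # at your organization's trusted artifact store.
    TRUSTED_MODEL_DIRS = {Path("/opt/models/vetted").resolve()}

    def load_model_safely(model_path: str):
        """Load a model only from an allowlisted directory, and only in
        the safetensors format, which stores tensors without pickle."""
        path = Path(model_path).resolve()
        if not any(path.is_relative_to(d) for d in TRUSTED_MODEL_DIRS):
            raise PermissionError(f"untrusted model location: {path}")
        # use_safetensors=True makes from_pretrained fail rather than
        # silently load pickle-based .bin weights.
        return AutoModel.from_pretrained(path, use_safetensors=True)

Pairing the allowlist with sandboxed or containerized loading (item 3) limits the blast radius even if a vetted source is later compromised.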

Technical Details

Data Version: 5.2
Assigner Short Name: zdi
Date Reserved: 2025-12-18T20:43:16.275Z
CVSS Version: 3.0
State: PUBLISHED

Threat ID: 694b064e4eddf7475afca16d

Added to database: 12/23/2025, 9:14:54 PM

Last enriched: 12/31/2025, 12:21:24 AM

Last updated: 2/7/2026, 12:43:01 PM

Views: 116
