
CVE-2025-14924: CWE-502: Deserialization of Untrusted Data in Hugging Face Transformers

Severity: High
Published: Tue Dec 23 2025 (12/23/2025, 21:04:40 UTC)
Source: CVE Database V5
Vendor/Project: Hugging Face
Product: Transformers

Description

Hugging Face Transformers megatron_gpt2 Deserialization of Untrusted Data Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the parsing of checkpoints. The issue results from the lack of proper validation of user-supplied data, which can result in deserialization of untrusted data. An attacker can leverage this vulnerability to execute code in the context of the current process. Was ZDI-CAN-27984.

AI-Powered Analysis

Last updated: 12/23/2025, 21:19:41 UTC

Technical Analysis

CVE-2025-14924 is a deserialization vulnerability, classified under CWE-502, in the Hugging Face Transformers library, specifically in the megatron_gpt2 model checkpoint parsing functionality. The flaw stems from the lack of proper validation of user-supplied checkpoint data, which is deserialized without sufficient checks, allowing an attacker to embed malicious serialized objects. When a user loads a malicious checkpoint, whether by visiting a crafted webpage or opening a malicious file, the deserialization process can lead to arbitrary code execution in the context of the running process. Exploitation requires user interaction (UI:R) but no privileges or authentication, making the flaw accessible to remote attackers who can trick users into loading malicious data. The CVSS v3.0 score of 7.8 reflects high severity due to the potential for full compromise of confidentiality, integrity, and availability of affected systems. The issue is particularly critical in environments where Hugging Face Transformers is used for AI/ML model deployment, as attackers could execute code to manipulate model behavior, exfiltrate sensitive data, or disrupt services. No patches are currently linked and no exploits have been reported in the wild, but the vulnerability was publicly disclosed on December 23, 2025, and is tracked by the Zero Day Initiative as ZDI-CAN-27984.
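To make the CWE-502 mechanism concrete, the sketch below shows in generic terms how Python's pickle format (the basis of many legacy model checkpoints) executes attacker-chosen callables during deserialization. This is a minimal illustration of the vulnerability class, not the specific megatron_gpt2 flaw; the `EvilCheckpoint` class is a hypothetical stand-in for a malicious object embedded in a checkpoint file.

```python
import pickle

class EvilCheckpoint:
    """Hypothetical stand-in for a malicious object inside a checkpoint file."""
    def __reduce__(self):
        # pickle calls eval("6 * 7") while reconstructing the object; a real
        # attacker would return something like (os.system, ("<command>",)).
        return (eval, ("6 * 7",))

# Attacker side: serialize the object into bytes that could be shipped
# inside a seemingly ordinary checkpoint file.
payload = pickle.dumps(EvilCheckpoint())

# Victim side: merely deserializing the untrusted bytes runs the attacker's
# callable, before any application-level validation gets a chance to run.
result = pickle.loads(payload)
print(result)  # 42 — attacker-chosen code executed during loading
```

The key point is that the code runs as a side effect of loading, which is why validating checkpoint contents *after* deserialization is already too late.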

Potential Impact

For European organizations, the impact of this vulnerability is significant, especially for those leveraging Hugging Face Transformers in AI research, development, and production environments. Successful exploitation could lead to remote code execution, allowing attackers to compromise machine learning infrastructure, steal intellectual property, manipulate AI model outputs, or disrupt critical AI-driven services. This could affect sectors such as finance, healthcare, automotive, and telecommunications, where AI models are increasingly integrated. The requirement for user interaction limits automated exploitation but does not eliminate risk, as social engineering or malicious content delivery can trigger the vulnerability. Additionally, compromised AI systems could undermine trust in AI outputs and cause regulatory and compliance issues under GDPR and other European data protection laws. The lack of available patches increases exposure time, emphasizing the need for immediate mitigation.

Mitigation Recommendations

European organizations should implement the following specific mitigations:

1) Avoid loading untrusted or unauthenticated model checkpoints or serialized data within Hugging Face Transformers environments.
2) Employ strict input validation and integrity checks (e.g., cryptographic signatures) on all checkpoint files before deserialization.
3) Use sandboxing or containerization to isolate AI model execution environments, limiting the impact of potential code execution.
4) Monitor user interactions that involve loading external model data and educate users on the risks of opening untrusted files or links.
5) Keep abreast of updates from Hugging Face and apply patches promptly once available.
6) Implement network-level controls to restrict access to internal AI infrastructure and detect anomalous behavior indicative of exploitation attempts.
7) Conduct regular security assessments and penetration testing focused on AI/ML pipelines to identify and remediate deserialization and related vulnerabilities.
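As one way to approach the integrity-check recommendation, the sketch below gates checkpoint loading on a SHA-256 allow-list. The `TRUSTED_SHA256` table and `verify_checkpoint` helper are hypothetical names for illustration; in practice the digests would come from a signed manifest distributed out of band, and loaders would additionally prefer non-executable formats such as safetensors (or `torch.load(..., weights_only=True)` where the installed PyTorch version supports it).

```python
import hashlib
from pathlib import Path

# Hypothetical allow-list of known-good checkpoint digests. The placeholder
# value here is the SHA-256 of an empty file, used only for demonstration.
TRUSTED_SHA256 = {
    "model.ckpt": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def verify_checkpoint(path: str) -> bool:
    """Return True only if the file's SHA-256 digest matches the allow-list.

    The check must happen on the raw bytes, before any deserialization,
    since code in a malicious checkpoint runs as a side effect of loading.
    """
    p = Path(path)
    digest = hashlib.sha256(p.read_bytes()).hexdigest()
    return TRUSTED_SHA256.get(p.name) == digest
```

A loader would call `verify_checkpoint(path)` and refuse to deserialize anything that fails the check; a tampered or unknown file simply never reaches the deserializer.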


Technical Details

Data Version
5.2
Assigner Short Name
zdi
Date Reserved
2025-12-18T20:49:41.182Z
Cvss Version
3.0
State
PUBLISHED

Threat ID: 694b064e4eddf7475afca176

Added to database: 12/23/2025, 9:14:54 PM

Last enriched: 12/23/2025, 9:19:41 PM

Last updated: 12/26/2025, 5:15:24 PM


