
CVE-2025-14924: CWE-502: Deserialization of Untrusted Data in Hugging Face Transformers

Severity: High
Tags: vulnerability, cve-2025-14924, cwe-502
Published: Tue Dec 23 2025 (12/23/2025, 21:04:40 UTC)
Source: CVE Database V5
Vendor/Project: Hugging Face
Product: Transformers

Description

Hugging Face Transformers megatron_gpt2 Deserialization of Untrusted Data Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the parsing of checkpoints. The issue results from the lack of proper validation of user-supplied data, which can result in deserialization of untrusted data. An attacker can leverage this vulnerability to execute code in the context of the current process. Was ZDI-CAN-27984.

AI-Powered Analysis

Last updated: 12/31/2025, 00:16:55 UTC

Technical Analysis

CVE-2025-14924 is a remote code execution vulnerability, classified under CWE-502 (Deserialization of Untrusted Data), in the Hugging Face Transformers library, specifically in the megatron_gpt2 checkpoint-parsing functionality. The flaw stems from the library's failure to validate user-supplied data before deserializing model checkpoints: a maliciously crafted checkpoint can execute arbitrary code in the context of the running process. Exploitation requires user interaction; an attacker must convince a victim to open a malicious checkpoint file or visit a malicious page that triggers the deserialization, making social engineering a likely delivery vector. No privileges or authentication are required. The CVSS v3.0 base score is 7.8 (high severity), with full impact on confidentiality, integrity, and availability. The attack vector is local (AV:L), meaning the malicious checkpoint must be processed on the victim's machine, but attack complexity is low. No patches have been linked yet, and no exploits are known in the wild as of the publication date. This vulnerability is critical for environments where Transformers is used to load or parse model checkpoints, especially AI development and deployment pipelines where untrusted data sources may be present.
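To make the CWE-502 mechanism concrete, the sketch below shows why deserializing an untrusted pickle-based file is equivalent to running attacker-supplied code. This is an illustrative, self-contained demo of Python's `__reduce__` hook, not the actual vulnerable Transformers code path; `MaliciousCheckpoint` is a hypothetical stand-in for a crafted checkpoint, and `print` stands in for a real payload such as `os.system`.

```python
import pickle

# CWE-502 in miniature: during deserialization, pickle invokes whatever
# callable the stream's __reduce__ payload names. A checkpoint distributed
# as a pickle-based file is therefore an instruction stream, not inert data.

class MaliciousCheckpoint:
    def __reduce__(self):
        # __reduce__ runs at pickling time, but the (callable, args) tuple it
        # returns is executed at UNpickling time on the victim's machine.
        # print() is a harmless stand-in for os.system or similar.
        return (print, ("code ran during checkpoint load",))

payload = pickle.dumps(MaliciousCheckpoint())

# The victim merely "loads a checkpoint" -- no attribute access or method
# call on the result is needed for the payload to execute:
pickle.loads(payload)
```

Note that the attacker's class does not need to exist on the victim's system; the pickle stream only references the callable (`print` here) by name, which is why simply calling `pickle.loads` (or any loader built on it) on an untrusted file is sufficient for exploitation.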

Potential Impact

This vulnerability poses a significant risk to European organizations, especially those involved in AI research, development, and deployment using Hugging Face Transformers. Successful exploitation could lead to arbitrary code execution, resulting in full system compromise, data theft, or disruption of AI services: the confidentiality of data processed by AI models could be breached, the integrity of AI workflows compromised, and the availability of AI services disrupted. Industries such as finance, healthcare, automotive, and government agencies that leverage AI technologies are particularly exposed. Because exploitation requires user interaction, phishing or social engineering campaigns are a plausible trigger. Given the increasing adoption of AI frameworks in Europe, the vulnerability could be leveraged to target intellectual property or critical infrastructure. The current absence of known exploits provides a window for proactive mitigation, but the high severity score underscores the urgency of addressing this issue.

Mitigation Recommendations

1. Avoid loading or parsing model checkpoints from untrusted or unauthenticated sources.
2. Implement strict input validation and sanitization for any checkpoint files before deserialization.
3. Use sandboxing or containerization to isolate the execution environment of AI model loading processes and limit potential damage from exploitation.
4. Monitor and restrict user interactions that involve opening files or visiting URLs that could trigger deserialization.
5. Employ endpoint protection solutions capable of detecting anomalous behavior related to code execution in AI frameworks.
6. Stay updated with Hugging Face releases and apply patches promptly once available.
7. Educate users on the risks of opening untrusted files or links related to AI models.
8. Consider disabling or restricting the use of megatron_gpt2 or other vulnerable components until a patch is released.
9. Conduct regular security assessments of AI pipelines to identify and remediate deserialization risks.
10. Implement network segmentation to limit lateral movement if exploitation occurs.
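Recommendation 2 (validate before deserializing) can be sketched with Python's documented `pickle.Unpickler.find_class` hook, which restricts which globals a stream may resolve. This is a minimal illustration under the assumption that your checkpoint format only needs a small, known set of classes; the `ALLOWED` set here is hypothetical and would need to match your actual format. It is not the official Hugging Face fix.

```python
import io
import pickle

# Assumed allowlist of (module, name) globals a legitimate checkpoint may
# reference. Extend it to match your checkpoint format; anything not listed
# is rejected before it can be instantiated.
ALLOWED = {
    ("collections", "OrderedDict"),
}

class RestrictedUnpickler(pickle.Unpickler):
    """Unpickler that refuses any global outside the allowlist."""

    def find_class(self, module, name):
        if (module, name) not in ALLOWED:
            raise pickle.UnpicklingError(
                f"blocked global {module}.{name} during deserialization"
            )
        return super().find_class(module, name)

def safe_loads(data: bytes):
    """Deserialize bytes, rejecting streams that name disallowed callables."""
    return RestrictedUnpickler(io.BytesIO(data)).load()

# A payload that smuggles os.system is rejected before execution:
class Evil:
    def __reduce__(self):
        import os
        return (os.system, ("echo pwned",))

try:
    safe_loads(pickle.dumps(Evil()))
except pickle.UnpicklingError as exc:
    print("blocked:", exc)
```

For PyTorch-based checkpoints specifically, `torch.load(path, weights_only=True)` applies a comparable restriction, and the `safetensors` format avoids pickle entirely; both are preferable to hand-rolled allowlists where they fit the workflow.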


Technical Details

Data Version: 5.2
Assigner Short Name: zdi
Date Reserved: 2025-12-18T20:49:41.182Z
CVSS Version: 3.0
State: PUBLISHED

Threat ID: 694b064e4eddf7475afca176

Added to database: 12/23/2025, 9:14:54 PM

Last enriched: 12/31/2025, 12:16:55 AM

Last updated: 2/2/2026, 11:32:04 AM



