
CVE-2025-14930: CWE-502: Deserialization of Untrusted Data in Hugging Face Transformers

Severity: High
Tags: Vulnerability, CVE-2025-14930, CWE-502
Published: Tue Dec 23 2025 (12/23/2025, 21:04:52 UTC)
Source: CVE Database V5
Vendor/Project: Hugging Face
Product: Transformers

Description

Hugging Face Transformers GLM4 Deserialization of Untrusted Data Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the parsing of weights. The issue results from the lack of proper validation of user-supplied data, which can result in deserialization of untrusted data. An attacker can leverage this vulnerability to execute code in the context of the current process. Was ZDI-CAN-28309.

AI-Powered Analysis

Last updated: 12/31/2025, 00:16:02 UTC

Technical Analysis

CVE-2025-14930 is a deserialization vulnerability (CWE-502) affecting Hugging Face Transformers version 4.57.1. The flaw lies in the parsing of model weights: user-supplied data is deserialized without proper validation, so an attacker who can supply a crafted serialized payload can achieve arbitrary code execution in the context of the running process. Although the CVE description speaks of "remote attackers," the CVSS attack vector is local (AV:L), because exploitation requires user interaction, such as opening a malicious weights file or visiting a page that triggers the deserialization routine.

The CVSS 3.0 base score is 7.8 (high severity): local attack vector (AV:L), low attack complexity (AC:L), no privileges required (PR:N), user interaction required (UI:R), with confidentiality, integrity, and availability impacts all rated high (C:H/I:H/A:H). Successful exploitation can therefore compromise all three, potentially leading to full system compromise. No public exploits are known at this time; the vulnerability was reserved and published in December 2025. The issue is critical for organizations relying on Hugging Face Transformers for AI and machine learning workloads, especially where models are loaded from untrusted sources or user inputs. The absence of a patch link suggests a fix may still be pending or only recently released, so affected organizations should monitor closely for updates.
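To illustrate the general CWE-502 pattern at issue (not the exact Transformers code path, which is not public here): Python's pickle format lets a serialized object name an arbitrary callable that is invoked automatically during loading. A minimal, harmless sketch, in which the `MaliciousWeights` class and `side_effect` function are hypothetical stand-ins for a crafted weights file:

```python
import pickle

EVENTS = []

def side_effect(msg):
    # Stand-in for attacker-controlled code; a real payload could
    # invoke os.system or similar instead of appending to a list.
    EVENTS.append(msg)
    return msg

class MaliciousWeights:
    """Mimics a serialized 'model weights' object carrying a payload."""
    def __reduce__(self):
        # pickle will call side_effect("payload ran") at load time.
        return (side_effect, ("payload ran",))

blob = pickle.dumps(MaliciousWeights())
result = pickle.loads(blob)  # deserializing alone triggers the call
print(EVENTS)                # ['payload ran']
```

The point is that no method of the loaded object needs to be called: merely deserializing untrusted data executes the embedded callable, which is why loading weights from unauthenticated sources is dangerous.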

Potential Impact

For European organizations, the impact of CVE-2025-14930 is significant, particularly for those integrating Hugging Face Transformers into AI-driven applications, research, or production environments. Successful exploitation can lead to arbitrary code execution, allowing attackers to compromise sensitive data, manipulate AI model outputs, or disrupt services. This affects sectors such as finance, healthcare, automotive, and government agencies that increasingly rely on AI technologies. Because exploitation requires user interaction, phishing or social engineering could be used to deliver the payload, raising the risk in environments with less stringent user awareness. Additionally, compromised AI models could produce erroneous decisions or leak data, undermining trust and compliance with GDPR and other data protection regulations. The vulnerability also poses risks to cloud-based AI services hosted in Europe, potentially impacting the availability and integrity of AI workloads.

Mitigation Recommendations

Organizations should immediately audit their use of Hugging Face Transformers, specifically version 4.57.1, and in addition:

- Upgrade to a patched version as soon as one is available.
- Until a patch is released, restrict the loading of model weights to trusted, authenticated sources.
- Implement strict input validation and sanitization for any data used in model weight parsing.
- Employ sandboxing or containerization to isolate AI workloads, limiting the impact of any code execution.
- Enhance user awareness training to reduce the risk of social engineering attacks that could trigger exploitation.
- Monitor network and system logs for unusual activity related to AI model loading or deserialization processes.
- Consider deploying application-layer controls to detect and block malicious serialized payloads.
- Maintain up-to-date backups and incident response plans tailored to AI infrastructure compromise scenarios.
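As one concrete defense-in-depth measure for the restrictions recommended above, Python's standard `pickle` module can be wrapped in an allowlisting unpickler that rejects any serialized reference to a callable outside an approved set. This is a generic sketch using only the stdlib; the `RestrictedUnpickler` name and `SAFE` allowlist are illustrative, not part of the Transformers API:

```python
import io
import pickle

class RestrictedUnpickler(pickle.Unpickler):
    # Allowlist of (module, name) pairs considered safe to resolve.
    SAFE = {("builtins", "list"), ("builtins", "dict"), ("builtins", "set")}

    def find_class(self, module, name):
        if (module, name) in self.SAFE:
            return super().find_class(module, name)
        # Any other global reference (functions, classes, os.system, ...)
        # is refused instead of being imported and executed.
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

def safe_loads(data: bytes):
    """Deserialize data while refusing non-allowlisted globals."""
    return RestrictedUnpickler(io.BytesIO(data)).load()

# Plain containers load fine...
assert safe_loads(pickle.dumps([1, 2, 3])) == [1, 2, 3]

# ...but a payload referencing a disallowed callable is rejected.
import os
try:
    safe_loads(pickle.dumps(os.system))
except pickle.UnpicklingError as exc:
    print("blocked:", exc)
```

Note that this only constrains pickle-based loading; where possible, preferring weight formats that contain no executable constructs at all (such as safetensors) removes the deserialization attack surface entirely.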


Technical Details

Data Version
5.2
Assigner Short Name
zdi
Date Reserved
2025-12-18T20:50:05.828Z
Cvss Version
3.0
State
PUBLISHED

Threat ID: 694b064e4eddf7475afca188

Added to database: 12/23/2025, 9:14:54 PM

Last enriched: 12/31/2025, 12:16:02 AM

Last updated: 2/4/2026, 7:32:31 PM

Views: 63

