CVE-2025-14930: CWE-502: Deserialization of Untrusted Data in Hugging Face Transformers
Hugging Face Transformers GLM4 Deserialization of Untrusted Data Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the parsing of weights. The issue results from the lack of proper validation of user-supplied data, which can result in deserialization of untrusted data. An attacker can leverage this vulnerability to execute code in the context of the current process. Was ZDI-CAN-28309.
AI Analysis
Technical Summary
CVE-2025-14930 is a deserialization vulnerability (CWE-502) affecting Hugging Face Transformers version 4.57.1. The flaw lies in the parsing of model weights: user-supplied data is not properly validated before deserialization, so an attacker can craft a malicious serialized payload that, when processed by the vulnerable library, executes arbitrary code in the context of the running process. Exploitation requires user interaction, such as opening a malicious model file or visiting a page that triggers the deserialization routine; the attacker can deliver the payload remotely, but the CVSS vector is scored as local because the victim must load the file on the affected system. The vulnerability is particularly dangerous because successful exploitation compromises confidentiality, integrity, and availability, potentially leading to full system compromise. The CVSS 3.0 base score is 7.8 (high severity), with the vector AV:L/AC:L/PR:N/UI:R/C:H/I:H/A:H: local attack vector, low attack complexity, no privileges required, user interaction required, and high impact on confidentiality, integrity, and availability. No public exploits are known at this time; the vulnerability was reserved and published in December 2025. The issue matters most to organizations that rely on Hugging Face Transformers for AI and machine learning workloads, especially where models are loaded from untrusted sources or user inputs. No patch link is available yet, which suggests a fix is still pending or only recently released, so affected organizations should monitor closely for updates.
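The advisory says only that the flaw lies in weight parsing and publishes no exploit details, so the following is an illustration of the general CWE-502 pattern in Python rather than the GLM4-specific code path. Legacy pickle-based checkpoints can embed a callable that runs at load time; the payload here is deliberately harmless and only demonstrates why deserializing untrusted data is equivalent to running untrusted code.

import pickle

class MaliciousPayload:
    def __reduce__(self):
        # pickle records "call this callable with these arguments" when the
        # object is dumped; a real attacker would substitute os.system or
        # similar for print.
        return (print, ("code executed during deserialization",))

blob = pickle.dumps(MaliciousPayload())

# Victim side: merely loading the blob invokes the embedded call.
pickle.loads(blob)  # arbitrary code runs at deserialization time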
Potential Impact
For European organizations, the impact of CVE-2025-14930 is significant, particularly for those integrating Hugging Face Transformers into AI-driven applications, research, or production environments. Successful exploitation can lead to remote code execution, allowing attackers to compromise sensitive data, manipulate AI model outputs, or disrupt services. This affects sectors such as finance, healthcare, automotive, and government, which increasingly rely on AI technologies. Because user interaction is required, phishing or social engineering can be used to deliver the payload, raising the risk in environments with less stringent user awareness. Compromised AI models could also lead to erroneous decisions or data leakage, undermining trust and compliance with GDPR and other data protection regulations. The vulnerability likewise poses risks to cloud-based AI services hosted in Europe, potentially affecting the availability and integrity of AI workloads.
Mitigation Recommendations
Organizations should immediately audit their use of Hugging Face Transformers, specifically version 4.57.1, and plan to upgrade to a patched version once available. Until a patch is released, restrict the loading of model weights from untrusted or unauthenticated sources. Implement strict input validation and sanitization for any data used in model weight parsing. Employ sandboxing or containerization to isolate AI workloads, limiting the potential impact of code execution. Enhance user awareness training to reduce the risk of social engineering attacks that could trigger exploitation. Monitor network and system logs for unusual activity related to AI model loading or deserialization processes. Consider deploying application-layer controls to detect and block malicious serialized payloads. Finally, maintain up-to-date backups and incident response plans tailored to AI infrastructure compromise scenarios.
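As a practical complement to these recommendations, a minimal sketch of safer weight-loading habits is shown below, assuming a standard transformers plus PyTorch stack; the model identifier and checkpoint path are placeholders, not references taken from the advisory.

import torch
from transformers import AutoModel

# Prefer safetensors checkpoints: the format stores raw tensors and cannot
# embed executable objects the way pickle-based .bin weight files can.
model = AutoModel.from_pretrained(
    "org/vetted-model",        # placeholder ID -- pin to sources you trust
    use_safetensors=True,      # refuse the pickle-based weight fallback
    revision="main",           # or pin an exact commit hash for provenance
)

# If a raw PyTorch checkpoint must be loaded, restrict unpickling to tensors.
state_dict = torch.load("local_checkpoint.pt", weights_only=True)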
Affected Countries
Germany, France, United Kingdom, Netherlands, Sweden, Finland
Technical Details
- Data Version: 5.2
- Assigner Short Name: zdi
- Date Reserved: 2025-12-18T20:50:05.828Z
- CVSS Version: 3.0
- State: PUBLISHED
Threat ID: 694b064e4eddf7475afca188
Added to database: 12/23/2025, 9:14:54 PM
Last enriched: 12/31/2025, 12:16:02 AM
Last updated: 2/7/2026, 10:45:42 AM
Related Threats
- CVE-2026-2083: SQL Injection in code-projects Social Networking Site (Medium)
- CVE-2026-2082: OS Command Injection in D-Link DIR-823X (Medium)
- CVE-2026-2080: Command Injection in UTT HiPER 810 (High)
- CVE-2026-2079: Improper Authorization in yeqifu warehouse (Medium)
- CVE-2026-1675: CWE-1188 Initialization of a Resource with an Insecure Default in brstefanovic Advanced Country Blocker (Medium)