
CVE-2025-14922: CWE-502: Deserialization of Untrusted Data in Hugging Face Diffusers

Severity: High
Tags: vulnerability, CVE-2025-14922, CWE-502
Published: Tue Dec 23 2025 (12/23/2025, 21:05:03 UTC)
Source: CVE Database V5
Vendor/Project: Hugging Face
Product: Diffusers

Description

Hugging Face Diffusers CogView4 Deserialization of Untrusted Data Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Diffusers. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the parsing of checkpoints. The issue results from the lack of proper validation of user-supplied data, which can result in deserialization of untrusted data. An attacker can leverage this vulnerability to execute code in the context of the current process. Was ZDI-CAN-27424.

AI-Powered Analysis

Last updated: 12/31/2025, 00:13:43 UTC

Technical Analysis

CVE-2025-14922 is a deserialization vulnerability (CWE-502) in the Hugging Face Diffusers library, specifically the CogView4 model component. The flaw lies in the parsing of checkpoint files: user-supplied data is not properly validated before deserialization, so an attacker can craft a malicious checkpoint file that, when loaded by Diffusers, triggers deserialization of untrusted data and leads to remote code execution (RCE) in the context of the current process. Exploitation requires user interaction, such as opening a malicious file or visiting a malicious webpage that causes the vulnerable code path to execute.

The vulnerability carries a CVSS v3.0 score of 7.8 (High): local attack vector (AV:L), low attack complexity (AC:L), no privileges required (PR:N), user interaction required (UI:R), unchanged scope (S:U), and high impact on confidentiality, integrity, and availability (C:H/I:H/A:H). No public exploits are known at this time, but the potential for serious compromise exists, especially in environments where Hugging Face Diffusers is used for AI model development or deployment.

The vulnerability was reported through the Zero Day Initiative (ZDI-CAN-27424) and published on December 23, 2025. The affected version is identified by a specific commit hash, indicating a particular build of the software. The root cause is unsafe deserialization of checkpoint data, a common RCE vector in software that processes serialized objects without strict validation or sandboxing. This vulnerability highlights the risks of integrating AI/ML libraries without rigorous input validation and security controls.
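Legacy Diffusers checkpoint formats are loaded through Python's pickle machinery (via `torch.load`), and the general CWE-502 attack class can be illustrated with the standard library alone. The sketch below uses hypothetical names and a benign stand-in callable, not a real exploit payload: a pickled object's `__reduce__` method lets deserialization itself invoke an attacker-chosen callable.

```python
import pickle

RESULTS = []

def attacker_callable(msg):
    # Benign stand-in: a real payload would reference os.system,
    # eval, or similar instead of appending to a list.
    RESULTS.append(msg)

class MaliciousCheckpoint:
    """Hypothetical malicious object. __reduce__ instructs pickle to
    call attacker_callable during deserialization (CWE-502)."""
    def __reduce__(self):
        return (attacker_callable, ("attacker code ran on load",))

blob = pickle.dumps(MaliciousCheckpoint())

# Merely deserializing the blob executes the attacker's callable --
# no attribute access or method call on the result is needed.
pickle.loads(blob)
print(RESULTS)  # ['attacker code ran on load']
```

This is why modern loaders favor formats that cannot encode code, such as safetensors, or restrict what the unpickler may resolve (e.g., `torch.load(..., weights_only=True)` in recent PyTorch releases).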

Potential Impact

For European organizations, the impact of CVE-2025-14922 can be significant, particularly for those using Hugging Face Diffusers in AI research, development, or production environments. Successful exploitation could lead to full system compromise, data theft, or disruption of AI services, affecting the confidentiality, integrity, and availability of critical systems. Organizations handling sensitive data or intellectual property in AI models are at risk of espionage or sabotage. The requirement for user interaction limits mass exploitation, but targeted attacks against AI teams and data scientists remain a concern. The vulnerability could also serve as a pivot point for lateral movement, especially where AI tooling is integrated with broader IT infrastructure. Given the growing adoption of AI technologies across European industries, including finance, healthcare, and manufacturing, this vulnerability poses a tangible threat to operational continuity and data security.

Mitigation Recommendations

1. Avoid loading checkpoint files from untrusted or unauthenticated sources.
2. Monitor Hugging Face's official channels for a patch addressing this vulnerability and apply it promptly once available.
3. Implement strict input validation and sanitization for all data processed by AI/ML pipelines, especially serialized objects such as checkpoints.
4. Use sandboxing or containerization to isolate model execution environments, limiting the impact of any code execution.
5. Educate users and developers about the risks of opening untrusted files or following suspicious links related to AI tools.
6. Employ endpoint detection and response (EDR) solutions to detect anomalous behavior indicative of exploitation attempts.
7. Review and restrict the permissions of processes running AI workloads to minimize the scope of a potential compromise.
8. Conduct regular security assessments and code reviews focusing on deserialization and input handling in AI frameworks.
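A defensive counterpart to recommendation 3 is a restricted unpickler that refuses to resolve any global not explicitly allow-listed. The sketch below is standard-library only and the allow-list is illustrative: a real checkpoint loader would need to enumerate the exact classes its format legitimately uses (e.g., tensor reconstruction helpers).

```python
import io
import os
import pickle

class RestrictedUnpickler(pickle.Unpickler):
    """Resolve only explicitly allow-listed (module, name) globals;
    reject everything else. The allow-list here is a placeholder."""
    ALLOWED = {("collections", "OrderedDict")}

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked global: {module}.{name}")

def safe_loads(data: bytes):
    """Drop-in replacement for pickle.loads with the allow-list applied."""
    return RestrictedUnpickler(io.BytesIO(data)).load()

# Plain data structures deserialize normally...
print(safe_loads(pickle.dumps({"weights": [0.1, 0.2]})))

# ...but a pickle smuggling in a dangerous callable is rejected.
try:
    safe_loads(pickle.dumps(os.system))
except pickle.UnpicklingError as exc:
    print(exc)
```

This pattern (the basis of `torch.load(..., weights_only=True)`) blocks the code-execution primitive but does not make pickle a safe format for untrusted input in general; preferring non-executable formats such as safetensors remains the stronger control.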


Technical Details

Data Version: 5.2
Assigner Short Name: zdi
Date Reserved: 2025-12-18T20:43:28.842Z
CVSS Version: 3.0
State: PUBLISHED

Threat ID: 694b064e4eddf7475afca173

Added to database: 12/23/2025, 9:14:54 PM

Last enriched: 12/31/2025, 12:13:43 AM

Last updated: 2/6/2026, 5:22:08 AM



