
CVE-2025-14922: CWE-502: Deserialization of Untrusted Data in Hugging Face Diffusers

Severity: High
Tags: CVE-2025-14922, CWE-502
Published: 2025-12-23 21:05:03 UTC
Source: CVE Database V5
Vendor/Project: Hugging Face
Product: Diffusers

Description

Hugging Face Diffusers CogView4 Deserialization of Untrusted Data Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Diffusers. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the parsing of checkpoints. The issue results from the lack of proper validation of user-supplied data, which can result in deserialization of untrusted data. An attacker can leverage this vulnerability to execute code in the context of the current process. Was ZDI-CAN-27424.

AI-Powered Analysis

Last updated: 12/23/2025, 21:19:59 UTC

Technical Analysis

CVE-2025-14922 is a deserialization vulnerability, classified under CWE-502, in the Hugging Face Diffusers library, specifically affecting the CogView4 model's checkpoint parsing. The flaw stems from the library's failure to properly validate user-supplied data when deserializing model checkpoints. Deserialization vulnerabilities arise when untrusted data is reconstructed into live objects without sufficient validation; in formats built on Python's pickle protocol, the serialized byte stream can direct the loader to invoke arbitrary callables, so an attacker who controls the input controls code execution.

In this case, an attacker can craft a malicious checkpoint file, or lure a user to a malicious webpage that triggers loading of such a checkpoint. Upon deserialization, arbitrary code executes with the privileges of the running process, potentially leading to full system compromise. The attack requires user interaction (opening a malicious file or visiting a malicious URL) but no prior authentication or elevated privileges. The CVSS v3.0 score of 7.8 reflects high severity, with high impact on confidentiality, integrity, and availability, and low attack complexity.

No patches or public exploit code are currently available, but the vulnerability is publicly disclosed and should be treated as a significant risk for organizations using this library in their AI/ML pipelines. It was reported through ZDI (Zero Day Initiative) under the identifier ZDI-CAN-27424.
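The underlying mechanism can be illustrated with plain pickle. This is a generic CWE-502 sketch, not code from the Diffusers codebase: legacy PyTorch checkpoint formats (`.bin`/`.ckpt`) are pickle-based, and any object in the stream can redirect the loader to an attacker-chosen callable via `__reduce__`. The payload here is deliberately harmless (it evaluates a benign expression); a real exploit would substitute `os.system` or similar.

```python
import pickle
import sys

class MaliciousCheckpoint:
    """Stands in for a crafted checkpoint: a pickle-based format can
    embed any object, including one whose __reduce__ runs code on load."""
    def __reduce__(self):
        # pickle invokes this callable with these args at *load* time.
        # Benign stand-in payload; an attacker would use os.system etc.
        return (eval, ("__import__('sys').version",))

crafted_bytes = pickle.dumps(MaliciousCheckpoint())

# The victim merely loads the data -- code runs during deserialization,
# before the "model" is ever used.
result = pickle.loads(crafted_bytes)
assert result == sys.version  # the attacker-chosen expression was evaluated
```

This is why the description stresses that merely opening a malicious file suffices: no attribute access or model inference is needed, loading alone triggers the payload.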

Potential Impact

For European organizations, the impact of CVE-2025-14922 can be substantial, particularly for those leveraging Hugging Face Diffusers in AI research, development, or production environments. Successful exploitation can lead to remote code execution, allowing attackers to gain control over affected systems, exfiltrate sensitive data, manipulate AI models, or disrupt services. This can compromise intellectual property, violate data protection regulations such as GDPR, and damage organizational reputation. Since the vulnerability requires user interaction, phishing or social engineering campaigns could be used to trigger exploitation. Organizations integrating Diffusers into web applications, cloud services, or internal tools are at heightened risk. The potential for lateral movement and persistence within networks increases the threat's severity. Additionally, AI model integrity could be compromised, affecting downstream applications relying on these models. The lack of known exploits in the wild currently reduces immediate risk but does not diminish the need for proactive mitigation.

Mitigation Recommendations

1. Avoid loading or parsing checkpoint files from untrusted or unknown sources until a patch is available.
2. Monitor official Hugging Face channels and repositories for security updates addressing this vulnerability and apply them promptly.
3. Implement strict input validation and sanitization for any user-supplied data related to model checkpoints.
4. Use sandboxing or containerization to isolate processes that handle model loading, limiting the impact of exploitation.
5. Educate users and developers about the risks of opening untrusted files or clicking unknown links, emphasizing caution with AI model files.
6. Employ network-level protections such as web filtering and email security to reduce the risk of malicious payload delivery.
7. Conduct regular security assessments and code reviews of AI/ML pipelines to identify and remediate similar deserialization risks.
8. Consider disabling or restricting features that automatically load external checkpoints or models where feasible.
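Recommendation 3 (strict validation of checkpoint data) can be sketched with a stdlib-only allow-list unpickler. This is an illustrative stopgap under the assumption that a pickle-based format cannot be avoided entirely; where the Diffusers API supports it, preferring safetensors checkpoints (e.g. `from_pretrained(..., use_safetensors=True)`) is the stronger fix, since safetensors carries no executable payloads. The `SafeUnpickler`/`safe_load` names below are hypothetical, not part of any library.

```python
import io
import pickle
from collections import OrderedDict

class SafeUnpickler(pickle.Unpickler):
    """Allow-list unpickler: refuses to resolve any global (class or
    function) that is not explicitly permitted, so payloads that smuggle
    in eval/os.system fail at load time instead of executing."""
    ALLOWED = {("collections", "OrderedDict")}  # extend per checkpoint schema

    def find_class(self, module, name):
        if (module, name) in self.ALLOWED:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(
            f"blocked global {module}.{name} during checkpoint load")

def safe_load(raw: bytes):
    return SafeUnpickler(io.BytesIO(raw)).load()

# Benign data built only from allowed types loads normally...
ok = safe_load(pickle.dumps(OrderedDict(weight=[0.1, 0.2])))

# ...while a stream that references a disallowed callable is rejected.
class Payload:
    def __reduce__(self):
        return (eval, ("1 + 1",))

try:
    safe_load(pickle.dumps(Payload()))
    blocked = False
except pickle.UnpicklingError:
    blocked = True
assert blocked
```

The allow-list must be kept minimal: every entry added to `ALLOWED` is an entry an attacker may be able to abuse, so permit only the types the checkpoint schema genuinely requires.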


Technical Details

Data Version: 5.2
Assigner Short Name: zdi
Date Reserved: 2025-12-18T20:43:28.842Z
CVSS Version: 3.0
State: PUBLISHED

Threat ID: 694b064e4eddf7475afca173

Added to database: 12/23/2025, 9:14:54 PM

Last enriched: 12/23/2025, 9:19:59 PM

Last updated: 12/26/2025, 7:19:11 PM


