
CVE-2025-14926: CWE-94: Improper Control of Generation of Code ('Code Injection') in Hugging Face Transformers

Severity: High
Tags: vulnerability, cve-2025-14926, cwe-94
Published: Tue Dec 23 2025 (12/23/2025, 21:04:32 UTC)
Source: CVE Database V5
Vendor/Project: Hugging Face
Product: Transformers

Description

CVE-2025-14926 is a high-severity code injection vulnerability in Hugging Face Transformers version 4.57.0, specifically in the convert_config function. It allows remote attackers to execute arbitrary Python code by supplying a malicious checkpoint configuration. Exploitation requires user interaction, as the target must convert a crafted checkpoint file. The vulnerability arises from improper validation of user-supplied strings before execution, enabling code execution with the privileges of the current user. Although no known exploits are currently in the wild, the impact on confidentiality, integrity, and availability is high. This vulnerability poses a significant risk to organizations using Hugging Face Transformers for machine learning model management or deployment. European organizations relying on this library, especially in AI research and development sectors, should prioritize patching or mitigating this issue. Countries with strong AI and tech industries, such as Germany, France, the UK, and the Netherlands, are likely to be most affected.

AI-Powered Analysis

Last updated: 12/31/2025, 00:21:51 UTC

Technical Analysis

CVE-2025-14926 is a remote code execution vulnerability classified under CWE-94 (Improper Control of Generation of Code) found in the Hugging Face Transformers library, version 4.57.0. The flaw exists in the convert_config function, which processes user-supplied configuration strings without adequate validation before executing them as Python code. This lack of sanitization allows an attacker to craft a malicious checkpoint configuration that, when converted by the vulnerable function, executes arbitrary code in the context of the current user. Exploitation requires user interaction: the victim must initiate the conversion of the malicious checkpoint. The vulnerability has a CVSS 3.0 base score of 7.8, indicating high severity, with attack vector local (AV:L), low attack complexity (AC:L), no privileges required (PR:N), and user interaction required (UI:R). The impact on confidentiality, integrity, and availability is high, as arbitrary code execution can lead to data theft, system compromise, or denial of service. No patches or exploit code are currently publicly available, but the vulnerability was assigned and published by ZDI (ZDI-CAN-28251). This vulnerability is particularly critical for environments where Hugging Face Transformers is used to manage or deploy machine learning models, as malicious checkpoints could be introduced through supply chain attacks or insider threats.
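To make the flaw class concrete, the fragment below is a minimal sketch of the CWE-94 pattern described above, not the actual Transformers convert_config implementation: the function names are hypothetical, and the point is only that evaluating a configuration string taken from a checkpoint executes attacker-controlled Python, while parsing it strictly as data does not.

import json

def unsafe_convert_config(raw_config: str):
    # Anti-pattern (CWE-94): the string pulled from the checkpoint is handed
    # to eval(), so any expression embedded in it runs with the privileges of
    # the user performing the conversion, e.g. '__import__("os").system("id")'.
    return eval(raw_config)

def safer_convert_config(raw_config: str) -> dict:
    # Safer alternative: treat the configuration as data, never as code.
    config = json.loads(raw_config)
    if not isinstance(config, dict):
        raise ValueError("checkpoint config must be a JSON object")
    return config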

Potential Impact

For European organizations, the impact of CVE-2025-14926 is significant due to the widespread adoption of Hugging Face Transformers in AI research, development, and deployment. Successful exploitation could lead to unauthorized access to sensitive data, manipulation or destruction of machine learning models, and potential lateral movement within networks. This could disrupt AI-driven services, compromise intellectual property, and damage organizational reputation. Sectors such as finance, healthcare, automotive, and government, which increasingly rely on AI technologies, are particularly at risk. The requirement for user interaction limits mass exploitation but does not eliminate targeted attacks, especially in environments where model conversion is routine. The absence of known exploits in the wild provides a window for proactive mitigation, but the high severity score underscores the urgency of addressing this vulnerability.

Mitigation Recommendations

1. Avoid converting or loading checkpoint files from untrusted or unknown sources to prevent triggering the vulnerability.
2. Implement strict input validation and sanitization for any user-supplied configuration data before processing.
3. Use sandboxing or containerization techniques to isolate the execution environment of model conversion processes, limiting potential damage from code execution (a minimal isolation sketch follows this list).
4. Monitor and audit logs for unusual activities related to model conversion or checkpoint processing.
5. Stay updated with Hugging Face releases and apply patches promptly once available.
6. Educate users and developers about the risks of processing untrusted model files and enforce policies restricting such actions.
7. Consider employing application whitelisting or endpoint protection solutions that can detect and block suspicious code execution patterns related to this vulnerability.
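The sketch below illustrates recommendation 3 under stated assumptions: the convert_checkpoint.py entry point and its command-line arguments are hypothetical, and a child process with a stripped environment and a timeout is only a weak boundary, so container- or VM-level isolation is preferable when checkpoints come from outside the organization.

import subprocess
import sys

def convert_in_sandbox(checkpoint_path: str, output_path: str, timeout: int = 300) -> None:
    # Run the (hypothetical) conversion script in a separate process with a
    # stripped environment and a hard timeout, so code executed by a malicious
    # checkpoint cannot read inherited secrets and cannot run indefinitely.
    result = subprocess.run(
        [sys.executable, "convert_checkpoint.py", checkpoint_path, output_path],
        env={},                 # drop inherited environment variables and credentials
        capture_output=True,    # retain stdout/stderr for auditing (recommendation 4)
        timeout=timeout,        # bound runtime even if the payload hangs
        check=False,
    )
    if result.returncode != 0:
        raise RuntimeError("conversion failed: " + result.stderr.decode(errors="replace"))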


Technical Details

Data Version
5.2
Assigner Short Name
zdi
Date Reserved
2025-12-18T20:49:50.656Z
CVSS Version
3.0
State
PUBLISHED

Threat ID: 694b064e4eddf7475afca17c

Added to database: 12/23/2025, 9:14:54 PM

Last enriched: 12/31/2025, 12:21:51 AM

Last updated: 2/7/2026, 5:38:25 AM

Views: 99

