
CVE-2025-14926: CWE-94: Improper Control of Generation of Code ('Code Injection') in Hugging Face Transformers

Severity: High
Tags: vulnerability, cve, cve-2025-14926, cwe-94
Published: Tue Dec 23 2025 (12/23/2025, 21:04:32 UTC)
Source: CVE Database V5
Vendor/Project: Hugging Face
Product: Transformers

Description

Hugging Face Transformers SEW convert_config Code Injection Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability: the target must convert a malicious checkpoint. The specific flaw exists within the convert_config function. The issue results from the lack of proper validation of a user-supplied string before it is used to execute Python code. An attacker can leverage this vulnerability to execute code in the context of the current user. Formerly tracked as ZDI-CAN-28251.

AI-Powered Analysis

Last updated: 12/23/2025, 21:19:15 UTC

Technical Analysis

CVE-2025-14926 is a remote code execution vulnerability classified under CWE-94 (Improper Control of Generation of Code) affecting Hugging Face Transformers version 4.57.0. The vulnerability exists in the convert_config function, which processes user-supplied strings without adequate validation before executing them as Python code. This lack of sanitization allows an attacker to craft a malicious checkpoint file that, when converted by the vulnerable function, triggers arbitrary code execution with the privileges of the current user. Exploitation requires user interaction: the target must initiate the conversion of the malicious checkpoint, which can occur in environments where users import or update machine learning models from external sources. The vulnerability impacts confidentiality, integrity, and availability by allowing attackers to run arbitrary commands, potentially leading to data theft, system compromise, or denial of service. Although no public exploits are known at this time, the high CVSS score of 7.8 indicates significant risk. The vulnerability was identified and published by ZDI (ZDI-CAN-28251) on December 23, 2025. The affected product, Hugging Face Transformers, is widely used in natural language processing and AI applications, making this vulnerability relevant to organizations relying on these technologies.
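
The advisory does not publish the vulnerable code, but the flaw class it describes (an attacker-controlled string from a checkpoint evaluated as Python during conversion) can be illustrated with a minimal, hypothetical sketch. The function names below are placeholders and do not reflect the actual convert_config implementation in Transformers; the safe variant shows the kind of literal-only parsing that closes this class of injection.

# Hypothetical illustration of the CWE-94 pattern described above. This is NOT
# the actual Transformers convert_config code, only a sketch of how evaluating
# a checkpoint-supplied string as Python becomes arbitrary code execution.
import ast

def unsafe_convert_config(raw_value: str):
    # Vulnerable pattern: the string comes straight from an untrusted checkpoint
    # and is handed to eval(), so any Python expression embedded in it will run.
    return eval(raw_value)  # e.g. "__import__('os').system('id')" would execute

def safe_convert_config(raw_value: str):
    # Safer pattern: ast.literal_eval accepts only Python literals (numbers,
    # strings, tuples, lists, dicts) and rejects arbitrary expressions.
    return ast.literal_eval(raw_value)

if __name__ == "__main__":
    benign = "(3, 3, 3)"                      # a typical config-style literal
    print(safe_convert_config(benign))         # -> (3, 3, 3)

    malicious = "__import__('os').getcwd()"    # stand-in for attacker-supplied code
    try:
        safe_convert_config(malicious)
    except (ValueError, SyntaxError) as exc:
        print(f"rejected by literal_eval: {exc}")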

Potential Impact

For European organizations, this vulnerability poses a substantial risk, particularly for those integrating Hugging Face Transformers into their AI and machine learning pipelines. Successful exploitation could lead to unauthorized code execution, resulting in data breaches, intellectual property theft, or disruption of AI services. Organizations handling sensitive data or operating critical infrastructure that leverages AI models are especially vulnerable. The requirement for user interaction limits mass exploitation but does not eliminate risk, as attackers could employ social engineering or supply chain attacks to trick users into converting malicious checkpoints. The impact extends to cloud environments and on-premises deployments where the vulnerable library is used. Given the growing adoption of AI technologies in Europe, this vulnerability could affect sectors such as finance, healthcare, automotive, and government services, potentially undermining trust and operational continuity.

Mitigation Recommendations

1. Avoid converting or loading checkpoint files from untrusted or unauthenticated sources until a patch is released.
2. Monitor Hugging Face official channels and apply security patches or updates promptly once available.
3. Implement strict input validation and sanitization for any user-supplied data processed by the convert_config function or related workflows.
4. Employ sandboxing or containerization to isolate the execution environment of model conversion processes, limiting the impact of potential code execution.
5. Educate users and developers about the risks of processing untrusted model files and enforce policies to verify the integrity and provenance of AI models before use (a minimal hash-check sketch follows this list).
6. Use runtime application self-protection (RASP) or endpoint detection and response (EDR) tools to detect anomalous behaviors indicative of exploitation attempts.
7. Review and restrict permissions of the user accounts running the Transformers library to minimize potential damage from exploitation.
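
As a concrete starting point for recommendation 5, the sketch below shows one way to verify a checkpoint against a known-good SHA-256 digest before it is passed to any conversion workflow. The helper names and the source of the expected digest are assumptions for illustration; in practice the trusted digest would come from a vendor release manifest or an internal model registry.

# Minimal provenance-check sketch (assumed helper names, not part of Transformers).
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    # Stream the file so large checkpoints do not need to fit in memory.
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_before_convert(checkpoint: Path, expected_sha256: str) -> None:
    # Refuse to convert anything whose hash does not match the trusted digest.
    actual = sha256_of(checkpoint)
    if actual != expected_sha256:
        raise RuntimeError(
            f"Checkpoint hash mismatch for {checkpoint}: expected "
            f"{expected_sha256}, got {actual}; refusing to convert."
        )
    print(f"{checkpoint} verified; safe to hand off to the conversion workflow.")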


Technical Details

Data Version: 5.2
Assigner Short Name: zdi
Date Reserved: 2025-12-18T20:49:50.656Z
CVSS Version: 3.0
State: PUBLISHED

Threat ID: 694b064e4eddf7475afca17c

Added to database: 12/23/2025, 9:14:54 PM

Last enriched: 12/23/2025, 9:19:15 PM

Last updated: 12/26/2025, 7:10:09 PM


