CVE-2025-14928: CWE-94: Improper Control of Generation of Code ('Code Injection') in Hugging Face Transformers
Hugging Face Transformers HuBERT convert_config Code Injection Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability in that the target must convert a malicious checkpoint. The specific flaw exists within the convert_config function. The issue results from the lack of proper validation of a user-supplied string before using it to execute Python code. An attacker can leverage this vulnerability to execute code in the context of the current user. Was ZDI-CAN-28253.
AI Analysis
Technical Summary
CVE-2025-14928 is a high-severity vulnerability in the Hugging Face Transformers library version 4.57.0, specifically within the convert_config function used for converting HuBERT model checkpoint configurations. The root cause is improper control over code generation (CWE-94): user-supplied strings are executed as Python code without adequate validation or sanitization. An attacker who can supply a malicious checkpoint file can therefore execute arbitrary Python code in the context of the user who performs the conversion. Exploitation requires user interaction, meaning the victim must initiate the conversion of a malicious checkpoint. The vulnerability affects confidentiality, integrity, and availability by enabling attackers to run arbitrary commands, potentially leading to data theft, system compromise, or service disruption. The CVSS v3.0 score is 7.8 (High), reflecting the significant impact and the ease of exploitation once the malicious input is processed. Although no public exploits are known yet, the widespread use of Hugging Face Transformers in AI and machine learning pipelines makes this a significant threat vector. The lack of patches at the time of disclosure necessitates immediate risk mitigation. The vulnerability was reported through ZDI (ZDI-CAN-28253) and publicly disclosed on December 23, 2025.
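The advisory does not publish the vulnerable code path, but the flaw class (CWE-94) follows a well-known pattern: a string read from an attacker-controlled file is handed to eval() or exec(). The sketch below is illustrative only; the function name and payload are hypothetical and are not taken from the Transformers source.

```python
# Illustrative sketch of the CWE-94 pattern -- NOT the actual
# Transformers convert_config implementation. A string taken from an
# untrusted checkpoint is passed to eval(), so whoever authored the
# checkpoint decides what Python runs.

def convert_config_unsafe(raw_value: str):
    # If raw_value comes from a malicious checkpoint, eval() executes
    # whatever expression the attacker embedded in it.
    return eval(raw_value)

# A benign checkpoint might store a plain literal such as "1024";
# a malicious one can store an expression with side effects instead.
benign = convert_config_unsafe("1024")
payload = "__import__('os').getpid()"  # stand-in for arbitrary attacker code
attacker_controlled = convert_config_unsafe(payload)
```

The point of the sketch is that eval() makes no distinction between data and code: any expression in the string runs with the full privileges of the converting user.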
Potential Impact
For European organizations, the impact of CVE-2025-14928 can be substantial, especially for those heavily invested in AI and machine learning workflows built on Hugging Face Transformers. Successful exploitation could lead to unauthorized code execution, resulting in data breaches, intellectual property theft, or disruption of AI services. This could affect sectors such as finance, healthcare, automotive, and research institutions that rely on AI models for critical operations. The compromise of AI infrastructure could also undermine trust in AI-driven decision-making systems. Additionally, the vulnerability could be leveraged as a foothold for lateral movement within networks, escalating the severity of attacks. Given the high confidentiality, integrity, and availability impact, organizations face risks of regulatory penalties under GDPR if personal data is exposed. The requirement for user interaction limits mass exploitation but does not eliminate targeted attacks, especially in environments where model checkpoint conversions are routine.
Mitigation Recommendations
To mitigate CVE-2025-14928, European organizations should immediately audit their use of Hugging Face Transformers, particularly version 4.57.0, and avoid converting checkpoint files from untrusted or unauthenticated sources. Until an official patch is released, implement strict input validation and sanitization on any user-supplied data involved in model configuration conversions. Employ sandboxing or containerization techniques to isolate the conversion process, limiting the potential damage from arbitrary code execution. Monitor and restrict permissions of users and services performing model conversions to minimize privilege escalation risks. Incorporate security reviews into AI/ML pipeline workflows and educate developers and data scientists about the risks of processing untrusted model files. Stay alert for official patches or updates from Hugging Face and apply them promptly. Additionally, implement network segmentation and endpoint detection to identify suspicious activities related to model conversion processes.
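As a concrete example of the input-validation step recommended above, configuration values read from a checkpoint can be parsed with Python's ast.literal_eval, which accepts only literals (numbers, strings, tuples, lists, dicts, booleans, None) and raises an error for anything containing function calls or attribute access. This is a general hardening sketch, not a patch for the Transformers code; the function name is hypothetical.

```python
import ast

def convert_config_safe(raw_value: str):
    # ast.literal_eval only evaluates Python literals and raises
    # ValueError or SyntaxError for expressions with side effects,
    # so an embedded call like __import__('os') never executes.
    try:
        return ast.literal_eval(raw_value)
    except (ValueError, SyntaxError):
        raise ValueError(f"rejected non-literal config value: {raw_value!r}")

# Plain literals convert as expected; injected expressions are rejected
# with an exception instead of being executed.
size = convert_config_safe("1024")
```

Literal parsing only addresses the string-to-code step; it should be combined with the sandboxing and least-privilege controls above, since checkpoint formats may carry other unsafe deserialization paths.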
Affected Countries
Germany, France, United Kingdom, Netherlands, Sweden, Finland, Denmark
Technical Details
- Data Version: 5.2
- Assigner Short Name: zdi
- Date Reserved: 2025-12-18T20:49:58.765Z
- CVSS Version: 3.0
- State: PUBLISHED
Threat ID: 694b064e4eddf7475afca182
Added to database: 12/23/2025, 9:14:54 PM
Last enriched: 12/31/2025, 12:15:42 AM
Last updated: 2/4/2026, 5:26:42 PM
Related Threats
CVE-2026-25115: CWE-693: Protection Mechanism Failure in n8n-io n8n (Critical)
CVE-2026-25056: CWE-434: Unrestricted Upload of File with Dangerous Type in n8n-io n8n (Critical)
CVE-2026-25055: CWE-22: Improper Limitation of a Pathname to a Restricted Directory ('Path Traversal') in n8n-io n8n (High)
CVE-2026-25054: CWE-80: Improper Neutralization of Script-Related HTML Tags in a Web Page (Basic XSS) in n8n-io n8n (High)
CVE-2026-25053: CWE-78: Improper Neutralization of Special Elements used in an OS Command ('OS Command Injection') in n8n-io n8n (Critical)