CVE-2025-14927: CWE-94: Improper Control of Generation of Code ('Code Injection') in Hugging Face Transformers
CVE-2025-14927 is a high-severity remote code execution vulnerability in Hugging Face Transformers version 4.57.0, specifically in the convert_config function. It arises from improper validation of user-supplied strings that are executed as Python code, enabling attackers to execute arbitrary code with the privileges of the current user. Exploitation requires user interaction, specifically converting a malicious checkpoint file. The vulnerability impacts confidentiality, integrity, and availability, with a CVSS score of 7.8. There are no known exploits in the wild yet, but the risk is significant due to the widespread use of Hugging Face Transformers in AI and ML workflows. European organizations using this library in development or production environments are at risk, especially those in countries with strong AI sectors. Mitigation involves avoiding untrusted checkpoint files, applying patches once available, and implementing strict input validation and sandboxing.
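A quick way to scope exposure is to check which environments actually run the release named above. The snippet below is a minimal, standard-library-only sketch that flags a locally installed transformers package matching 4.57.0; the assumption that only 4.57.0 is affected comes from this advisory and should be re-confirmed against the vendor's fix notes once a patched release is published.

```python
# Minimal exposure check for CVE-2025-14927 (assumes only 4.57.0 is affected,
# per the advisory above; re-check the affected range when a patch ships).
from importlib.metadata import PackageNotFoundError, version

AFFECTED_VERSIONS = {"4.57.0"}

try:
    installed = version("transformers")
except PackageNotFoundError:
    print("transformers is not installed in this environment")
else:
    if installed in AFFECTED_VERSIONS:
        print(f"transformers {installed} is in the known-affected set for CVE-2025-14927")
    else:
        print(f"transformers {installed} is not in the known-affected set for CVE-2025-14927")
```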
AI Analysis
Technical Summary
CVE-2025-14927 is a high-severity vulnerability in the Hugging Face Transformers library, version 4.57.0, specifically within the convert_config function. This function improperly handles user-supplied strings by executing them as Python code without adequate validation, leading to a code injection flaw classified under CWE-94. An attacker who can trick a user into converting a maliciously crafted checkpoint file can execute arbitrary code with the privileges of the current user; although the flaw is commonly described as remote code execution, the CVSS attack vector is local because exploitation proceeds through a file the victim must process. The vulnerability requires user interaction, meaning the target must perform the conversion operation on the malicious checkpoint. The flaw impacts confidentiality, integrity, and availability because arbitrary code execution can lead to data theft, system compromise, or denial of service. The CVSS 3.0 base score of 7.8 reflects high severity, with a local attack vector (AV:L), low attack complexity (AC:L), no privileges required (PR:N), user interaction required (UI:R), unchanged scope (S:U), and high impacts on confidentiality, integrity, and availability (C:H/I:H/A:H). While no public exploits are currently known, the vulnerability poses a significant risk given the popularity of Hugging Face Transformers in AI/ML pipelines. The lack of a patch at the time of publication necessitates immediate risk mitigation strategies. This vulnerability was initially reported as ZDI-CAN-28252 and is now publicly disclosed.
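The advisory does not include the vulnerable code path, so the following is only an illustrative sketch of the CWE-94 pattern described above, not the actual convert_config implementation in Transformers: a helper that evaluates an attacker-controlled configuration string as Python code, contrasted with a literal-only parse that rejects executable input. All function names here are hypothetical.

```python
# Illustrative sketch of the CWE-94 pattern only; NOT Hugging Face Transformers code.
import ast


def convert_config_value_unsafe(raw_value: str):
    # Anti-pattern (CWE-94): evaluating an attacker-controlled string taken from a
    # checkpoint's configuration as Python code. A value such as
    # "__import__('os').system('id')" would run an arbitrary command.
    return eval(raw_value)


def convert_config_value_safe(raw_value: str):
    # Safer pattern: accept only Python literals (numbers, strings, lists, dicts).
    # ast.literal_eval raises ValueError or SyntaxError on anything executable.
    try:
        return ast.literal_eval(raw_value)
    except (ValueError, SyntaxError) as exc:
        raise ValueError(f"rejected non-literal config value: {raw_value!r}") from exc


if __name__ == "__main__":
    print(convert_config_value_safe("{'hidden_size': 768, 'num_layers': 12}"))
    # convert_config_value_safe("__import__('os').system('id')") raises ValueError
```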
Potential Impact
For European organizations, the impact of CVE-2025-14927 can be substantial, especially those engaged in AI research, development, and deployment using Hugging Face Transformers. Successful exploitation can lead to unauthorized code execution, enabling attackers to access sensitive data, manipulate machine learning models, disrupt AI services, or pivot within networks. This can compromise intellectual property, violate data protection regulations such as GDPR, and cause operational downtime. Organizations relying on automated ML workflows or integrating third-party checkpoints are particularly vulnerable. The requirement for user interaction limits mass exploitation but does not eliminate risk, as social engineering or supply chain attacks could facilitate exploitation. The high confidentiality and integrity impact also raise concerns for sectors handling sensitive or regulated data, including finance, healthcare, and government. Additionally, compromised AI models could produce erroneous outputs, undermining trust and decision-making processes.
Mitigation Recommendations
To mitigate CVE-2025-14927, European organizations should:
1) Immediately avoid converting or loading checkpoint files from untrusted or unauthenticated sources to prevent malicious input (a minimal verification sketch for this and item 3 follows the list).
2) Monitor for updates from Hugging Face and apply security patches promptly once released.
3) Implement strict input validation and sanitization around any user-supplied configuration or checkpoint data before processing.
4) Employ sandboxing or containerization techniques to isolate the execution environment of the convert_config function, limiting potential damage from code execution.
5) Educate users and developers about the risks of processing untrusted checkpoints and enforce policies restricting such actions.
6) Integrate runtime monitoring and anomaly detection to identify suspicious activities during model conversion or execution.
7) Review and harden access controls to minimize the privileges of users performing model conversions, reducing the impact of potential exploitation.
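As referenced in items 1 and 3, the sketch below gates a conversion step behind a pinned SHA-256 allowlist so that only vetted checkpoint files are ever processed. The file name, path, and allowlist entry are hypothetical placeholders, and the check is a pre-condition around whatever conversion tooling is in use rather than a fix for the underlying flaw.

```python
# Sketch: allow a checkpoint to reach the conversion step only if its SHA-256
# digest matches a pinned allowlist entry. Paths and digests are placeholders.
import hashlib
from pathlib import Path

# SHA-256 digests of checkpoint files the organization has already vetted.
TRUSTED_CHECKPOINT_SHA256 = {
    "model.safetensors": "<sha256-of-vetted-file>",  # replace with a real digest
}


def sha256_of(path: Path, chunk_size: int = 1 << 20) -> str:
    # Stream the file so large checkpoints do not need to fit in memory.
    digest = hashlib.sha256()
    with path.open("rb") as fh:
        for chunk in iter(lambda: fh.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def is_trusted_checkpoint(path: Path) -> bool:
    # True only if the file exists and its digest matches the allowlist entry.
    expected = TRUSTED_CHECKPOINT_SHA256.get(path.name)
    return path.is_file() and expected is not None and sha256_of(path) == expected


if __name__ == "__main__":
    checkpoint = Path("checkpoints/model.safetensors")  # hypothetical path
    if not is_trusted_checkpoint(checkpoint):
        raise SystemExit(f"refusing to convert unverified checkpoint: {checkpoint}")
    # Only after this gate should any conversion step (e.g. code that calls
    # convert_config) run, ideally inside the sandboxed environment from item 4.
```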
Affected Countries
Germany, France, United Kingdom, Netherlands, Sweden, Finland
Technical Details
- Data Version: 5.2
- Assigner Short Name: zdi
- Date Reserved: 2025-12-18T20:49:54.276Z
- CVSS Version: 3.0
- State: PUBLISHED
Related Threats
- CVE-2026-2088: SQL Injection in PHPGurukul Beauty Parlour Management System (Medium)
- CVE-2026-2087: SQL Injection in SourceCodester Online Class Record System (Medium)
- CVE-2026-2086: Buffer Overflow in UTT HiPER 810G (High)
- CVE-2026-2085: Command Injection in D-Link DWR-M921 (High)
- CVE-2026-2084: OS Command Injection in D-Link DIR-823X (High)