CVE-2025-14920: CWE-502: Deserialization of Untrusted Data in Hugging Face Transformers
Description
Hugging Face Transformers Perceiver Model Deserialization of Untrusted Data Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the parsing of model files. The issue results from the lack of proper validation of user-supplied data, which can result in deserialization of untrusted data. An attacker can leverage this vulnerability to execute code in the context of the current user. Was ZDI-CAN-25423.
AI Analysis
Technical Summary
CVE-2025-14920 is a deserialization vulnerability (CWE-502) in the Hugging Face Transformers library, specifically in the Perceiver model's handling of model files. The flaw stems from a lack of validation of user-supplied data during deserialization, a classic vector for remote code execution (RCE). When a maliciously crafted model file is parsed, it can trigger execution of arbitrary code with the privileges of the user running the application. Exploitation requires user interaction, such as opening a malicious file or visiting a malicious webpage that triggers loading of the compromised model. The vulnerability carries a CVSS 3.0 base score of 7.8, reflecting high impact on confidentiality, integrity, and availability with low attack complexity and no privileges required, though user interaction is necessary. The flaw is especially critical in environments where Transformers is used to load models from untrusted or external sources, as it can lead to full system compromise. No public patches are currently listed and no exploits have been observed in the wild, but the risk remains significant given the widespread use of Transformers in AI/ML applications.
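To make the CWE-502 mechanics concrete, the snippet below is a generic illustration of how Python's pickle protocol executes attacker-chosen code during deserialization. It is not the specific Perceiver code path or the ZDI proof of concept, and the command it runs is a harmless placeholder.

```python
import os
import pickle


class MaliciousPayload:
    """Any object can define __reduce__; pickle calls the returned
    callable with the returned arguments at load time."""

    def __reduce__(self):
        # A real attacker would substitute an arbitrary shell command here.
        return (os.system, ("echo code executed during unpickling",))


blob = pickle.dumps(MaliciousPayload())

# The victim only has to deserialize the blob -- e.g. by loading a
# pickle-based model weights file -- for the callable to run.
pickle.loads(blob)
```

This is why loading a pickle-based weights file amounts to executing the file author's code: the format carries instructions, not just data.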
Potential Impact
For European organizations, the impact of CVE-2025-14920 can be substantial, especially for those running AI and machine learning frameworks in production or research environments. Successful exploitation could lead to unauthorized code execution, data theft, manipulation of AI model outputs, or disruption of AI-driven services, compromising sensitive data, intellectual property, and critical decision-making processes that rely on AI models. Industries such as finance, healthcare, automotive, and telecommunications, which increasingly integrate AI technologies, may face operational disruption and reputational damage. Organizations using Hugging Face Transformers in cloud or hybrid environments could additionally face lateral movement by attackers after initial exploitation. The user-interaction requirement limits automated mass exploitation but does not prevent targeted attacks, especially via phishing or supply chain vectors. The current absence of known exploits provides a window for proactive mitigation.
Mitigation Recommendations
European organizations should:
- Implement strict controls around the sourcing and validation of AI model files, ensuring they come from trusted and verified repositories (see the loading sketch below).
- Employ sandboxing or containerization when loading models to isolate potential malicious code execution.
- Update Hugging Face Transformers to the latest version once patches are released.
- Implement network-level protections to detect and block suspicious traffic related to model file downloads or interactions.
- Educate users about the risks of opening untrusted files or clicking unknown links that could trigger model loading.
- Employ application whitelisting and endpoint detection and response (EDR) solutions to monitor for anomalous behavior indicative of exploitation attempts.
- Integrate integrity checks or cryptographic signatures for model files to prevent tampering (also sketched below).
- Maintain an incident response plan tailored to AI/ML infrastructure compromises.
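The following is a minimal sketch of the sourcing and integrity recommendations above. The repository ID, revision, and digest are placeholders, not values from this advisory, and `verify_sha256` is a hypothetical helper; `use_safetensors` and `revision` are standard `from_pretrained` parameters in recent Transformers releases.

```python
import hashlib
from pathlib import Path

from transformers import AutoModel

# Prefer the safetensors format: it stores raw tensors and cannot execute
# code on load, unlike pickle-based .bin checkpoints. Pinning `revision`
# fixes the exact repository commit that was reviewed.
model = AutoModel.from_pretrained(
    "example-org/example-model",  # hypothetical repository ID
    use_safetensors=True,         # refuse pickle-based weight files
    revision="main",              # replace with an audited commit hash
)


def verify_sha256(path: Path, expected_hex: str) -> None:
    """Verify a model file against a digest published via a trusted channel."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest != expected_hex:
        raise ValueError(f"checksum mismatch for {path}: got {digest}")


# Example: verify_sha256(Path("model.safetensors"), "<published sha256>")
```

If a pickle-based checkpoint cannot be avoided, PyTorch's `torch.load(..., weights_only=True)` restricts unpickling to tensors and primitive types, which blocks the generic gadget shown earlier.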
Affected Countries
Germany, France, United Kingdom, Netherlands, Sweden, Finland
Technical Details
- Data Version: 5.2
- Assigner Short Name: zdi
- Date Reserved: 2025-12-18T20:43:16.275Z
- CVSS Version: 3.0
- State: PUBLISHED