
CVE-2024-11393: CWE-502: Deserialization of Untrusted Data in Hugging Face Transformers

Severity: High
Tags: Vulnerability, CVE-2024-11393, CWE-502
Published: Fri Nov 22 2024 (11/22/2024, 21:23:38 UTC)
Source: CVE Database V5
Vendor/Project: Hugging Face
Product: Transformers

Description

Hugging Face Transformers MaskFormer Model Deserialization of Untrusted Data Remote Code Execution Vulnerability. This vulnerability allows remote attackers to execute arbitrary code on affected installations of Hugging Face Transformers. User interaction is required to exploit this vulnerability in that the target must visit a malicious page or open a malicious file. The specific flaw exists within the parsing of model files. The issue results from the lack of proper validation of user-supplied data, which can result in deserialization of untrusted data. An attacker can leverage this vulnerability to execute code in the context of the current user. Was ZDI-CAN-25191.

AI-Powered Analysis

Machine-generated threat intelligence

Last updated: 02/26/2026, 13:32:56 UTC

Technical Analysis

CVE-2024-11393 is a deserialization vulnerability (CWE-502) in the Hugging Face Transformers library, specifically within the MaskFormer model's deserialization mechanism. The flaw stems from insufficient validation of user-supplied data during the parsing of model files, which allows an attacker to craft malicious model files that execute arbitrary code when deserialized. Exploitation requires no authentication but does require user interaction, such as opening a malicious file or visiting a malicious webpage that triggers the deserialization process.

The vulnerability impacts confidentiality, integrity, and availability: an attacker can run code with the same privileges as the user running the Transformers library, potentially leading to full system compromise. The CVSS 3.0 score of 8.8 reflects high severity, with a network attack vector, low attack complexity, no privileges required, and user interaction necessary. No public exploits are known at this time, but the widespread use of Hugging Face Transformers in AI/ML workflows makes this a significant concern.

The vulnerability was assigned by ZDI (ZDI-CAN-25191) and published on November 22, 2024. The affected version is identified by a specific commit hash, suggesting the issue may be limited to certain recent builds or versions. The lack of patch links indicates that a fix may not yet be publicly available, emphasizing the need for cautious handling of untrusted model files.
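The CWE-502 flaw class can be illustrated with a minimal, self-contained Python sketch. This is not the actual Transformers/MaskFormer code path, only a demonstration of why deserializing an untrusted pickle-based file is equivalent to running its author's code:

```python
import pickle

# Hypothetical demonstration, NOT the actual Transformers/MaskFormer code:
# pickle lets any object name a callable to invoke at load time via
# __reduce__, which is exactly the CWE-502 flaw class described above.
class Payload:
    def __reduce__(self):
        # A real attacker would return something like (os.system, ("...",));
        # a harmless callable is used here to show code runs during loading.
        return (str.upper, ("attacker code ran during deserialization",))

blob = pickle.dumps(Payload())  # stands in for a "malicious model file"
result = pickle.loads(blob)     # executes str.upper(...) while loading
print(result)                   # prints: ATTACKER CODE RAN DURING DESERIALIZATION
```

Safer alternatives store only tensor data rather than arbitrary objects: the safetensors format, or `torch.load(..., weights_only=True)` in recent PyTorch releases, restrict what deserialization is allowed to do.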

Potential Impact

This vulnerability poses a serious risk to organizations leveraging Hugging Face Transformers for machine learning and AI applications. Successful exploitation can lead to arbitrary code execution, allowing attackers to compromise systems, steal sensitive data, manipulate AI models, or disrupt services. Since the attack requires user interaction, phishing or social engineering could serve as vectors to deliver malicious model files or links. The impact extends to data confidentiality, as attackers could access sensitive datasets or intellectual property embedded in AI workflows, and to integrity, as attackers could alter model behavior or outputs, undermining trust in AI-driven decisions.

Availability could also be affected if attackers deploy ransomware or disrupt AI services. Given the growing adoption of Hugging Face Transformers across industries such as technology, finance, healthcare, and government, the potential scope is broad. Organizations using these models in cloud or on-premises environments without strict input validation are particularly vulnerable. The absence of known exploits in the wild provides a window for proactive mitigation, but the high CVSS score and ease of exploitation warrant urgent attention.

Mitigation Recommendations

1. Immediately restrict the loading of model files to trusted sources only; avoid loading models from unverified or user-supplied inputs.
2. Implement strict validation and sanitization of all model files before deserialization, including checksum verification and format validation.
3. Employ sandboxing or containerization to isolate the environment in which models are loaded, limiting the impact of any code execution.
4. Monitor user interactions that involve loading external model files and educate users about the risks of opening untrusted files or links.
5. Stay updated with Hugging Face security advisories and apply patches or updates as soon as they become available.
6. Consider application-level allowlists for model files, and disable or restrict deserialization features if they are not required.
7. Employ runtime detection tools that can identify anomalous behavior indicative of exploitation attempts.
8. Review and harden access controls and privilege levels for users and services running the Transformers library to minimize damage from potential exploits.
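Recommendation 2 (checksum verification before deserialization) can be sketched as follows. The allowlist contents and file names here are hypothetical placeholders; in practice the expected digests would come from a trusted, separately distributed manifest:

```python
import hashlib

def sha256_of_file(path: str) -> str:
    """Stream a file through SHA-256 so large model files need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Hypothetical allowlist; real digests would come from a trusted manifest.
TRUSTED_MODEL_HASHES = {
    "maskformer-base.bin": "0000000000000000000000000000000000000000000000000000000000000000",
}

def load_if_trusted(path: str, name: str) -> bytes:
    """Refuse to read (let alone deserialize) a model file whose hash is unknown."""
    expected = TRUSTED_MODEL_HASHES.get(name)
    if expected is None or sha256_of_file(path) != expected:
        raise ValueError(f"untrusted model file: {name}")
    with open(path, "rb") as f:
        return f.read()
```

When models are fetched from the Hugging Face Hub, pinning the `revision` parameter of `from_pretrained` to a specific commit hash serves a similar allowlisting purpose.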


Technical Details

Data Version: 5.1
Assigner Short Name: zdi
Date Reserved: 2024-11-18T23:29:51.422Z
CVSS Version: 3.0
State: PUBLISHED

Threat ID: 699f6e12b7ef31ef0b594a7f

Added to database: 2/25/2026, 9:48:02 PM

Last enriched: 2/26/2026, 1:32:56 PM

Last updated: 4/12/2026, 5:32:05 PM



