
CVE-2025-1945: CWE-345 Insufficient Verification of Data Authenticity in mmaitre314 picklescan

Severity: Medium
Tags: vulnerability, cve-2025-1945, cwe-345
Published: Mon Mar 10 2025 (03/10/2025, 11:43:02 UTC)
Source: CVE Database V5
Vendor/Project: mmaitre314
Product: picklescan

Description

picklescan before 0.0.23 fails to detect malicious pickle files inside PyTorch model archives when certain ZIP file flag bits are modified. By flipping specific bits in the ZIP file headers, an attacker can embed malicious pickle files that remain undetected by PickleScan while still being successfully loaded by PyTorch's torch.load(). This can lead to arbitrary code execution when loading a compromised model.
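A minimal sketch of the underlying mechanism, using only Python's standard zipfile module and a placeholder payload rather than a real pickle: flipping a general-purpose flag bit in a ZIP local file header leaves the archive fully readable by consumers that trust the central directory (as torch.load's unzipper does), which is what lets a tampered archive slip past a scanner that treats the modified header differently.

```python
import io
import struct
import zipfile

# Build an in-memory ZIP resembling a PyTorch archive member
# (the member name is illustrative, not PyTorch's exact layout).
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("archive/data.pkl", b"not-really-a-pickle")

data = bytearray(buf.getvalue())

# The local file header starts with the signature PK\x03\x04; the 2-byte
# general-purpose bit flag sits at offset 6 within that header.
sig = data.index(b"PK\x03\x04")
flag_off = sig + 6
(flags,) = struct.unpack_from("<H", data, flag_off)
# Set bit 11 (the UTF-8 filename flag) in the LOCAL header only,
# leaving the central directory's copy untouched.
struct.pack_into("<H", data, flag_off, flags | (1 << 11))

# A central-directory-driven reader still opens the member normally.
with zipfile.ZipFile(io.BytesIO(bytes(data))) as zf:
    recovered = zf.read("archive/data.pkl")
print(recovered)  # the payload survives the header tampering
```

Because the content and CRC are unchanged, the tampered archive extracts cleanly; only tools that compare or interpret the modified flag bits behave differently.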

AI-Powered Analysis

Last updated: 12/30/2025, 23:51:34 UTC

Technical Analysis

CVE-2025-1945 identifies a vulnerability in the mmaitre314 picklescan utility, versions prior to 0.0.23, which is designed to detect malicious pickle files embedded within PyTorch model archives. The root cause is insufficient verification of data authenticity (CWE-345) in the handling of ZIP file headers. By flipping specific bits in the ZIP file headers, an attacker can embed malicious pickle files that picklescan fails to detect, while PyTorch's torch.load() still loads them successfully, leading to arbitrary code execution when the model is loaded.

Exploitation requires no privileges or authentication, but user interaction is necessary: the victim must load the compromised model. The CVSS 4.0 base score is 5.3 (medium severity), reflecting the network attack vector, low attack complexity, no privileges required, and the need for user interaction. The vulnerability undermines the integrity and confidentiality of affected systems by enabling code execution through ostensibly trusted model files.

No patch links are included in the advisory, although the affected-version range indicates that 0.0.23 addresses the issue, and no exploits had been reported in the wild as of the publication date. This vulnerability is particularly relevant for environments that rely on picklescan to validate PyTorch model archives, as it undermines the trustworthiness of that control.

Potential Impact

For European organizations, this vulnerability poses a significant risk to the security of machine learning workflows that incorporate PyTorch models and use picklescan for verifying model integrity. Successful exploitation could lead to arbitrary code execution within the environment where the compromised model is loaded, potentially resulting in data breaches, unauthorized access, or disruption of services. Sectors heavily reliant on AI/ML, such as finance, healthcare, automotive, and critical infrastructure, could face operational and reputational damage. The attack vector being network-based with no privileges required increases the risk, especially in collaborative or cloud-based ML development environments common in Europe. Furthermore, the evasion of detection by picklescan undermines existing security controls, complicating incident detection and response. Although no known exploits exist yet, the medium severity rating and ease of exploitation warrant proactive measures to prevent potential future attacks.

Mitigation Recommendations

European organizations should update picklescan to version 0.0.23 or later, which restores detection of these tampered archives. Where an immediate upgrade is not possible, implement additional verification layers for PyTorch model archives, such as cryptographic signatures or checksums, to validate model authenticity before loading. Restrict the loading of models to trusted sources and environments to reduce exposure. Employ runtime application self-protection (RASP) or endpoint detection and response (EDR) solutions to monitor for anomalous behavior during model loading. Educate data scientists and ML engineers about the risks of loading untrusted models, and enforce strict access controls on model repositories. Regularly audit and monitor ML pipelines for suspicious activity. Finally, consider sandboxing model-loading processes to contain any malicious code that does execute.


Technical Details

Data Version
5.2
Assigner Short Name
Sonatype
Date Reserved
2025-03-04T12:59:35.306Z
Cvss Version
4.0
State
PUBLISHED

Threat ID: 695450bedb813ff03e2bf902

Added to database: 12/30/2025, 10:22:54 PM

Last enriched: 12/30/2025, 11:51:34 PM

Last updated: 2/6/2026, 7:40:44 PM



