
Picklescan Bugs Allow Malicious PyTorch Models to Evade Scans and Execute Code

Severity: Critical
Tags: vulnerability, rce, python
Published: Wed Dec 03 2025 (12/03/2025, 09:30:00 UTC)
Source: The Hacker News

Description

Three critical security flaws have been disclosed in an open-source utility called Picklescan that could allow malicious actors to execute arbitrary code by loading untrusted PyTorch models, effectively bypassing the tool's protections. Picklescan, developed and maintained by Matthieu Maitre (@mmaitre314), is a security scanner that's designed to parse Python pickle files and detect suspicious…

AI-Powered Analysis

Last updated: 12/03/2025, 10:45:22 UTC

Technical Analysis

Picklescan is an open-source utility designed to scan Python pickle files, particularly those used by PyTorch to serialize machine learning models, for malicious code by analyzing bytecode and blocking dangerous imports and operations. However, three critical vulnerabilities (CVE-2025-10155, CVE-2025-10156, CVE-2025-10157) have been discovered that allow attackers to bypass these protections. CVE-2025-10155 exploits a file extension bypass, enabling malicious pickle payloads to be disguised with common PyTorch model extensions like .pt or .bin, thus evading detection. CVE-2025-10156 leverages a CRC error in ZIP archives to disable archive scanning, allowing malicious models packaged in ZIP files to bypass checks. CVE-2025-10157 circumvents the unsafe globals check by evading the blocklist of dangerous imports, enabling arbitrary code execution upon loading the model.

Together, these vulnerabilities allow attackers to embed malicious code within PyTorch models that appear safe to Picklescan, facilitating supply chain attacks in which compromised models are distributed to unsuspecting users. The flaws highlight systemic challenges in securing AI model pipelines: reliance on blocklists rather than allowlists, discrepancies between security tools and PyTorch's file handling, and the rapid evolution of AI libraries outpacing security tooling. The vulnerabilities were responsibly disclosed on June 29, 2025, and addressed in Picklescan version 0.0.31, released on September 9, 2025. Despite the patches, the incident underscores the need for continuous, adaptive security measures in AI model management.
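To make the underlying risk concrete, the sketch below shows why unpickling an untrusted file is equivalent to running its author's code. This is a generic illustration of pickle deserialization, not the specific Picklescan bypasses described above; the class name and the command it runs are purely illustrative.

```python
import os
import pickle

class MaliciousPayload:
    # pickle records the callable and arguments returned by __reduce__;
    # that call ("run os.system('id')") is replayed during deserialization,
    # so the command executes the moment the bytes are loaded.
    def __reduce__(self):
        return (os.system, ("id",))

# Attacker side: serialize the payload and ship it as a "model" file.
blob = pickle.dumps(MaliciousPayload())

# Victim side: loading the bytes (directly, or indirectly via torch.load on a
# pickle-based .pt/.bin file) executes the embedded command before any model
# weights are ever used.
pickle.loads(blob)
```

A scanner like Picklescan tries to spot such payloads by inspecting the pickle bytecode for dangerous imports, which is exactly the layer the three CVEs above manage to sidestep.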

Potential Impact

For European organizations, especially those involved in AI research, development, and deployment using PyTorch, these vulnerabilities pose a significant risk. Attackers could distribute malicious models that evade detection, leading to arbitrary code execution on systems that load these models. This could result in unauthorized access, data exfiltration, system compromise, and disruption of AI services. Supply chain attacks leveraging these flaws could affect multiple organizations downstream, amplifying the impact. Critical infrastructure and sectors relying on AI-driven decision-making or automation could face operational disruptions or data integrity issues. The complexity and novelty of these attack vectors may also hinder timely detection and response. Organizations using Picklescan as their primary defense may have a false sense of security, increasing exposure. The threat is particularly acute for entities that integrate third-party or community-contributed models without stringent validation, a practice common in the collaborative AI environments prevalent across Europe.

Mitigation Recommendations

1. Immediately update Picklescan to version 0.0.31 or later to apply the security patches addressing these vulnerabilities.
2. Implement multi-layered scanning strategies that combine allowlisting with behavioral analysis to detect novel malicious payloads beyond blocklists.
3. Restrict loading of PyTorch models to trusted sources and enforce strict provenance and integrity checks, such as cryptographic signatures (see the sketch after this list).
4. Avoid loading untrusted pickle files; where possible, use alternative serialization formats like TensorFlow SavedModel or Flax that do not execute arbitrary code on load.
5. Employ sandboxing or isolated environments for loading and testing new models before deployment in production.
6. Monitor AI model supply chains for unusual activity or unexpected updates.
7. Educate developers and data scientists on the risks of untrusted model loading and secure coding practices.
8. Collaborate with AI security researchers to stay informed about emerging threats and adapt defenses accordingly.
9. Integrate continuous security testing and threat intelligence into AI development pipelines to detect and respond to evolving attack techniques.
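A minimal sketch of items 3 and 4 follows: it pins a model file to a known-good SHA-256 digest before loading, then reads it via the safetensors format, which stores raw tensors rather than pickled objects. Note that safetensors is not one of the formats named above, and the file path, digest value, and helper name are assumptions to adapt to your own pipeline.

```python
import hashlib
from pathlib import Path

from safetensors.torch import load_file  # pip install safetensors

# Hypothetical known-good digest, e.g. published alongside the model release.
EXPECTED_SHA256 = "replace-with-the-publisher-provided-digest"

def verify_digest(path: Path, expected: str) -> None:
    """Refuse to touch the file unless its SHA-256 matches the pinned value."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    if digest != expected:
        raise RuntimeError(f"Model digest mismatch for {path}: {digest}")

model_path = Path("model.safetensors")  # hypothetical path
verify_digest(model_path, EXPECTED_SHA256)

# safetensors files contain only tensor data, so loading them cannot run
# attacker-controlled code the way unpickling a .pt/.bin file can.
state_dict = load_file(str(model_path))
```

Where a pickle-based .pt file is unavoidable, recent PyTorch releases also support torch.load(path, weights_only=True), which restricts unpickling to tensor data and refuses arbitrary objects.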


Technical Details

Article Source
URL: https://thehackernews.com/2025/12/picklescan-bugs-allow-malicious-pytorch.html (fetched 2025-12-03 10:44:35 UTC, ~1,180 words)

Threat ID: 69301494e1f6412a90591c88

Added to database: 12/3/2025, 10:44:36 AM

Last enriched: 12/3/2025, 10:45:22 AM

Last updated: 12/5/2025, 3:12:51 AM

Views: 36

Community Reviews

0 reviews

