
CVE-2025-58756: CWE-502: Deserialization of Untrusted Data in Project-MONAI MONAI

Severity: High
Tags: vulnerability, cve-2025-58756, cwe-502
Published: Mon Sep 08 2025 (09/08/2025, 23:39:55 UTC)
Source: CVE Database V5
Vendor/Project: Project-MONAI
Product: MONAI

Description

MONAI (Medical Open Network for AI) is an AI toolkit for healthcare imaging. In versions up to and including 1.5.0, the call `model_dict = torch.load(full_path, map_location=torch.device(device), weights_only=True)` in monai/bundle/scripts.py loads model weights securely, thanks to `weights_only=True`. However, insecure loading methods still exist elsewhere in the project, such as when loading checkpoints. Loading pre-trained models downloaded from other platforms is common practice for users who want to reduce training time and costs. A checkpoint containing malicious content can trigger a deserialization vulnerability when loaded, leading to code execution. As of the time of publication, no fixed versions are available.
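The distinction is easiest to see side by side. The following is a minimal sketch (file paths are placeholders, not paths from the MONAI codebase) contrasting the restricted loader with the unrestricted one:

```python
import torch

# Secure pattern, as in monai/bundle/scripts.py: weights_only=True restricts
# deserialization to tensors and primitive containers, so a crafted
# checkpoint cannot smuggle in executable Python objects.
model_dict = torch.load("model.pt", map_location=torch.device("cpu"),
                        weights_only=True)

# Risky pattern: full pickle deserialization. Any Python object embedded in
# the file is reconstructed, which can execute attacker-controlled code.
checkpoint = torch.load("checkpoint.pt", weights_only=False)
```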

AI-Powered Analysis

Last updated: 09/16/2025, 01:09:10 UTC

Technical Analysis

CVE-2025-58756 is a high-severity deserialization vulnerability (CWE-502) affecting Project MONAI, an open-source AI toolkit designed for healthcare imaging applications. The vulnerability exists in versions up to and including 1.5.0 and arises from insecure deserialization methods used when loading model checkpoints. While the code snippet `model_dict = torch.load(full_path, map_location=torch.device(device), weights_only=True)` in monai/bundle/scripts.py uses the `weights_only=True` parameter to securely load model weights, other parts of the project still use less secure loading mechanisms. These insecure methods allow an attacker to craft malicious checkpoint files that, when deserialized by the vulnerable MONAI code, can execute arbitrary code on the host system.

This is particularly dangerous because users often load pre-trained models from external sources to reduce training time and costs, making it easy for attackers to distribute malicious checkpoints. The vulnerability has a CVSS v3.1 score of 8.8, indicating high severity: the attack vector is network (remote exploitation), attack complexity is low, low privileges are required with no user interaction, and confidentiality, integrity, and availability are all impacted.

As of the publication date, no patches or fixed versions are available, increasing the risk for organizations using MONAI in production or research environments. Attackers could compromise sensitive healthcare imaging data, manipulate AI model outputs, or disrupt critical AI-driven healthcare workflows.
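To make the deserialization mechanics concrete, here is a minimal, self-contained sketch of how a pickle-based checkpoint can carry executable code. The class name, file name, and payload are hypothetical, and the payload is a harmless echo rather than anything an attacker would use:

```python
import os
import torch

class MaliciousPayload:
    # pickle invokes __reduce__ during deserialization and calls the
    # returned function with the returned arguments, so simply loading
    # the file executes os.system before any model code runs.
    def __reduce__(self):
        return (os.system, ("echo 'code executed during torch.load'",))

torch.save({"state_dict": MaliciousPayload()}, "evil_checkpoint.pt")

# Unsafe load: the shell command above runs as a side effect.
torch.load("evil_checkpoint.pt", weights_only=False)

# Restricted load: the unpickler rejects the non-tensor object
# instead of executing it.
try:
    torch.load("evil_checkpoint.pt", weights_only=True)
except Exception as exc:
    print(f"rejected by weights_only=True: {exc}")
```

The same mechanism works for any callable reachable through pickle, which is why `weights_only=True` refuses to reconstruct arbitrary objects at all rather than trying to filter "bad" ones.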

Potential Impact

For European organizations, the impact of this vulnerability is significant due to the widespread adoption of AI in healthcare imaging and diagnostics. MONAI is used in medical research institutions, hospitals, and healthcare technology companies that rely on AI models for image analysis, diagnosis assistance, and treatment planning. Exploitation could lead to unauthorized access to sensitive patient data, manipulation of diagnostic results, or disruption of AI services, potentially causing misdiagnosis or delayed treatment. Given the strict data protection regulations in Europe, such as GDPR, a breach involving patient data could result in severe legal and financial consequences. Furthermore, the integrity compromise of AI models could undermine trust in AI-assisted healthcare solutions. The lack of a patch means organizations must rely on mitigation strategies to prevent exploitation. The threat also extends to research collaborations and cloud-based AI services that share or distribute pre-trained models, increasing the attack surface.

Mitigation Recommendations

1. Avoid loading MONAI model checkpoints from untrusted or unauthenticated sources. Only use checkpoints obtained from verified, trusted repositories or internal sources.
2. Implement strict validation and integrity checks (e.g., digital signatures or cryptographic hashes) on all model checkpoint files before loading them; a sketch of such a check follows this list.
3. Where possible, use the secure loading method with `weights_only=True`, as demonstrated in monai/bundle/scripts.py, to limit the deserialization scope.
4. Run MONAI workloads in isolated, sandboxed environments or containers with minimal privileges to contain the impact of potential exploitation.
5. Monitor and audit the use of model-loading functions to detect anomalous or unauthorized checkpoint loading activity.
6. Engage with the MONAI project community and maintain awareness of updates or patches addressing this vulnerability.
7. Consider network-level protections, such as restricting access to model repositories and employing intrusion detection systems tuned for suspicious deserialization activity.
8. Educate developers and data scientists about the risks of deserializing untrusted data and enforce secure coding practices around model loading.
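As a starting point for recommendation 2, an integrity-checked loader might look like the following sketch. The helper name, chunk size, and the idea of distributing a known-good SHA-256 digest out of band are assumptions for illustration, not part of MONAI's API:

```python
import hashlib
import torch

def load_verified_checkpoint(path: str, expected_sha256: str, device: str = "cpu"):
    """Load a checkpoint only if its SHA-256 digest matches a known-good
    value obtained out of band (e.g., published by the model's author)."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        # Hash in 1 MiB chunks so large checkpoints need not fit in memory.
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    if digest.hexdigest() != expected_sha256:
        raise ValueError(f"integrity check failed for {path}")
    # Defense in depth: even a verified file is loaded with the restricted
    # unpickler, mirroring the secure call in monai/bundle/scripts.py.
    return torch.load(path, map_location=torch.device(device), weights_only=True)
```

A signature scheme (e.g., GPG- or Sigstore-signed digests) gives stronger guarantees than a bare hash, since the expected digest itself must arrive over a trusted channel.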


Technical Details

Data Version
5.1
Assigner Short Name
GitHub_M
Date Reserved
2025-09-04T19:18:09.499Z
CVSS Version
3.1
State
PUBLISHED

Threat ID: 68bf6ad1d5a2966cfc84363b

Added to database: 9/8/2025, 11:46:25 PM

Last enriched: 9/16/2025, 1:09:10 AM

Last updated: 10/30/2025, 4:09:49 PM

