CVE-2025-58756: CWE-502: Deserialization of Untrusted Data in Project-MONAI MONAI
MONAI (Medical Open Network for AI) is an AI toolkit for healthcare imaging. In versions up to and including 1.5.0, the call `model_dict = torch.load(full_path, map_location=torch.device(device), weights_only=True)` in monai/bundle/scripts.py loads model weights securely, because `weights_only=True` restricts deserialization to tensor data. However, insecure loading calls still exist elsewhere in the project, such as when loading checkpoints. Loading pre-trained models downloaded from other platforms is a common practice when users want to reduce training time and cost, and loading a checkpoint containing malicious content can trigger a deserialization vulnerability, leading to code execution. As of the time of publication, no known fixed versions are available.
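For illustration, a minimal sketch of the two loading patterns (the file path and variable names here are hypothetical, not MONAI's actual code):

```python
import torch

# Safe: weights_only=True restricts deserialization to tensors and other
# allowlisted types, refusing arbitrary pickled objects.
model_dict = torch.load("checkpoint.pt", map_location="cpu", weights_only=True)

# Unsafe: in PyTorch releases where weights_only defaults to False,
# torch.load falls back to Python's pickle module, which can execute
# arbitrary code embedded in a crafted checkpoint.
model_dict = torch.load("checkpoint.pt", map_location="cpu")
```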
AI Analysis
Technical Summary
CVE-2025-58756 is a high-severity deserialization vulnerability (CWE-502) affecting MONAI, an open-source AI toolkit widely used for healthcare imaging applications. The vulnerability exists in versions up to and including 1.5.0. While the call `torch.load(full_path, map_location=torch.device(device), weights_only=True)` in monai/bundle/scripts.py uses the `weights_only=True` parameter to load model weights securely, other parts of the project still deserialize checkpoints insecurely; `torch.load` without `weights_only=True` falls back to Python's pickle module. An attacker who supplies a crafted checkpoint file containing malicious serialized data can therefore achieve arbitrary code execution on the host when the checkpoint is loaded. This is particularly critical because loading pre-trained models or checkpoints from untrusted or compromised sources is common practice for reducing AI training time and cost. The vulnerability has a CVSS 3.1 base score of 8.8: network attack vector, low attack complexity, low privileges required, no user interaction, and high impact on confidentiality, integrity, and availability. As of the publication date, no patches or fixed versions are available, increasing the risk for users who rely on MONAI for medical imaging AI workflows. Although no exploits have been reported in the wild yet, the potential for exploitation is significant given the sensitivity of healthcare data and the importance of AI model integrity.
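The root cause is pickle's ability to invoke arbitrary callables during deserialization, the same mechanism `torch.load` relies on when `weights_only` is not enforced. A minimal, benign illustration of CWE-502 (the class name is hypothetical, and the payload here merely prints a message, where an attacker could run any command):

```python
import pickle

class MaliciousPayload:
    # pickle calls __reduce__ when serializing; the returned callable
    # and arguments are invoked during deserialization.
    def __reduce__(self):
        return (print, ("code executed during unpickling",))

blob = pickle.dumps(MaliciousPayload())
pickle.loads(blob)  # prints the message: loading the data ran code
```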
Potential Impact
For European organizations, especially those in healthcare and medical research sectors, this vulnerability poses a significant risk. MONAI is used for AI-driven medical imaging analysis, which often involves sensitive patient data protected under GDPR and other privacy regulations. Exploitation could lead to unauthorized code execution on systems processing medical images, potentially resulting in data breaches, manipulation of diagnostic AI outputs, or disruption of critical healthcare services. The compromise of AI models could undermine trust in automated diagnostic tools, delay patient care, and cause regulatory and reputational damage. Additionally, since MONAI is open-source and widely adopted in academic and clinical research institutions across Europe, the attack surface is broad. The vulnerability could also be leveraged as a foothold for lateral movement within healthcare networks, impacting availability and integrity of healthcare IT infrastructure.
Mitigation Recommendations
European organizations using MONAI should:
- Immediately audit their use of checkpoint-loading functions and avoid loading checkpoints from untrusted or unauthenticated sources.
- Until an official patch is released, enforce strict source validation and integrity checks (e.g., cryptographic signatures or pinned hashes) on all model checkpoints before loading; a minimal sketch follows this list.
- Run MONAI workloads in sandboxes or containers to limit the impact of potential code execution.
- Monitor and alert on unusual process behavior or network activity originating from model-loading workflows to detect exploitation attempts.
- Track MONAI patch releases with the upstream community and apply updates promptly.
- Restrict network access and apply strict access controls to systems running MONAI to reduce exposure.
- Train staff on the risks of loading untrusted AI models and establish policies for secure model management.
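As one way to implement the integrity-check recommendation above, a minimal sketch that pins a checkpoint to a known SHA-256 digest before loading (the function name, file name, and digest are placeholders; in practice the expected hash would come from a trusted manifest or signing process):

```python
import hashlib
import torch

def load_verified_checkpoint(path: str, expected_sha256: str):
    """Load a checkpoint only if its SHA-256 digest matches a pinned value."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    if digest.hexdigest() != expected_sha256:
        raise ValueError(f"checkpoint {path} failed integrity check")
    # Prefer weights_only=True so deserialization stays restricted even
    # if the hash check is misconfigured or bypassed.
    return torch.load(path, map_location="cpu", weights_only=True)

# Usage (placeholder digest):
# state = load_verified_checkpoint("model.pt", "e3b0c44298fc1c14...")
```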
Affected Countries
Germany, France, United Kingdom, Netherlands, Sweden, Italy, Spain, Belgium, Switzerland, Denmark
Technical Details
- Data Version: 5.1
- Assigner Short Name: GitHub_M
- Date Reserved: 2025-09-04T19:18:09.499Z
- CVSS Version: 3.1
- State: PUBLISHED
Related Threats
- CVE-2025-21415: CWE-290: Authentication Bypass by Spoofing in Microsoft Azure AI Face Service (Critical)
- CVE-2025-21413: CWE-122: Heap-based Buffer Overflow in Microsoft Windows 10 Version 1809 (High)
- CVE-2025-21411: CWE-122: Heap-based Buffer Overflow in Microsoft Windows 10 Version 1809 (High)
- CVE-2025-21405: CWE-284: Improper Access Control in Microsoft Visual Studio 2022 version 17.12 (High)
- CVE-2025-21403: CWE-863: Incorrect Authorization in Microsoft On-Premises Data Gateway (Medium)