CVE-2025-58757: CWE-502: Deserialization of Untrusted Data in Project-MONAI MONAI
MONAI (Medical Open Network for AI) is an AI toolkit for healthcare imaging. In versions up to and including 1.5.0, the `pickle_operations` function in `monai/data/utils.py` automatically handles dictionary key-value pairs whose keys end with a specific suffix and deserializes their values using `pickle.loads()` without any security measures. The deserialization may lead to code execution. As of the time of publication, no fixed versions are available.
AI Analysis
Technical Summary
CVE-2025-58757 is a high-severity vulnerability affecting Project-MONAI, an AI toolkit widely used in healthcare imaging. The vulnerability arises from insecure deserialization in the `pickle_operations` function within the `monai/data/utils.py` module in versions up to and including 1.5.0. Specifically, this function automatically processes dictionary key-value pairs that end with a particular suffix by deserializing their values using Python's `pickle.loads()` method without any validation or security controls. Since `pickle` deserialization can execute arbitrary code if the input is crafted maliciously, this flaw enables an attacker to achieve remote code execution (RCE) on systems running vulnerable MONAI versions. The vulnerability requires no prior authentication but does require user interaction, such as loading or processing maliciously crafted data files or inputs. No patches or fixed versions are currently available at the time of publication, increasing the risk for organizations relying on MONAI in their healthcare AI workflows. The CVSS v3.1 base score is 8.8, reflecting high impact on confidentiality, integrity, and availability, with low attack complexity and no privileges required. Although no known exploits are currently observed in the wild, the nature of the vulnerability and MONAI's role in sensitive healthcare environments make this a critical concern for medical institutions and AI service providers using this toolkit.
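The class of risk described above can be illustrated with a short, self-contained sketch (hypothetical code, not taken from MONAI): Python's pickle protocol lets any class nominate a callable to be invoked during deserialization via `__reduce__`, so calling `pickle.loads()` on attacker-controlled bytes amounts to running attacker-chosen code.

```python
import pickle

# Hypothetical illustration, not MONAI code: pickle invokes the callable
# returned by __reduce__ while deserializing, before the caller ever sees
# the resulting object. An attacker would substitute os.system or similar;
# str.upper serves here as a harmless stand-in.
class Gadget:
    def __reduce__(self):
        return (str.upper, ("attacker-controlled input",))

payload = pickle.dumps(Gadget())  # bytes a malicious data file would carry
result = pickle.loads(payload)    # str.upper() runs here, unconditionally
print(result)                     # ATTACKER-CONTROLLED INPUT
```

This is why the vulnerability is rated high impact despite its simplicity: no bug in the surrounding code is needed, since executing the embedded callable is pickle's documented behavior.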
Potential Impact
For European organizations, the impact of this vulnerability is significant due to MONAI's specialized use in healthcare imaging AI, a sector with stringent data protection and patient safety requirements. Exploitation could lead to unauthorized code execution on systems processing medical images, potentially compromising patient data confidentiality, altering diagnostic results (integrity), or disrupting availability of critical AI services. This could result in severe regulatory consequences under GDPR, damage to organizational reputation, and direct harm to patient care. Additionally, healthcare providers and research institutions using MONAI for AI-driven diagnostics or treatment planning are at risk of targeted attacks aiming to manipulate or sabotage AI outputs. The lack of a patch increases exposure time, and the ease of exploitation means attackers could weaponize maliciously crafted input files or data streams to infiltrate networks. The threat also extends to AI service vendors and cloud platforms hosting MONAI-based applications, potentially affecting broader healthcare supply chains across Europe.
Mitigation Recommendations
Given the absence of an official patch, European organizations should implement immediate compensating controls. First, restrict and monitor the sources of input data processed by MONAI, ensuring only trusted and validated datasets are accepted. Employ strict network segmentation and access controls to limit exposure of MONAI instances to untrusted users or external networks. Use application-level whitelisting or sandboxing to isolate MONAI processes and prevent unauthorized code execution. Where feasible, disable or replace the vulnerable `pickle_operations` function with safer serialization methods that do not use `pickle.loads()`, such as JSON or other secure formats, after thorough testing. Monitor logs and system behavior for suspicious activity indicative of exploitation attempts. Engage with the MONAI community and vendors for updates or patches, and plan for rapid deployment once available. Additionally, conduct security awareness training for staff handling AI data inputs to recognize and avoid processing untrusted or suspicious files.
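One way to act on the recommendation to replace raw `pickle.loads()` is the restricted-unpickler pattern from the Python standard-library documentation. The sketch below is an assumption about how such a shim might look, not an official MONAI fix; the `SAFE_ROOTS` allow-list and `restricted_loads` helper are hypothetical names introduced here for illustration.

```python
import io
import pickle

# Hypothetical hardening shim, not an official MONAI fix: only allow
# pickle to resolve globals from an explicit allow-list of module roots.
SAFE_ROOTS = {"numpy"}  # extend cautiously, e.g. for torch tensor types

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        if module.split(".")[0] in SAFE_ROOTS:
            return super().find_class(module, name)
        # Anything else (os.system, builtins.eval, ...) is rejected.
        raise pickle.UnpicklingError(f"blocked unpickling of {module}.{name}")

def restricted_loads(data: bytes):
    """Drop-in replacement for pickle.loads() with an allow-list."""
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

Plain containers and scalars still round-trip (their opcodes never consult `find_class`), while a payload naming a disallowed callable raises `pickle.UnpicklingError` instead of executing. Note that this narrows but does not eliminate the attack surface, so it should complement, not replace, the input-provenance and segmentation controls above.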
Affected Countries
Germany, France, United Kingdom, Italy, Spain, Netherlands, Sweden, Belgium, Switzerland, Austria
Technical Details
- Data Version: 5.1
- Assigner Short Name: GitHub_M
- Date Reserved: 2025-09-04T19:18:09.500Z
- CVSS Version: 3.1
- State: PUBLISHED
Threat ID: 68bf6ad1d5a2966cfc84363e
Added to database: 9/8/2025, 11:46:25 PM
Last enriched: 9/9/2025, 12:01:33 AM
Last updated: 9/9/2025, 9:12:27 PM
Related Threats
- CVE-2025-10197: SQL Injection in HJSoft HCM Human Resources Management System (Medium)
- CVE-2025-10195: Improper Export of Android Application Components in Seismic App (Medium)
- CVE-2025-21417: CWE-122: Heap-based Buffer Overflow in Microsoft Windows 10 Version 1809 (High)
- CVE-2025-21409: CWE-122: Heap-based Buffer Overflow in Microsoft Windows 10 Version 1809 (High)
- CVE-2025-21336: CWE-203: Observable Discrepancy in Microsoft Windows 10 Version 1809 (Medium)