CVE-2025-58757: CWE-502: Deserialization of Untrusted Data in Project-MONAI MONAI
MONAI (Medical Open Network for AI) is an AI toolkit for health care imaging. In versions up to and including 1.5.0, the `pickle_operations` function in `monai/data/utils.py` automatically handles dictionary entries whose keys end with a specific suffix and deserializes their values using `pickle.loads()`. The function applies no security measures to this data, so deserialization may lead to code execution. As of the time of publication, no known fixed versions are available.
AI Analysis
Technical Summary
CVE-2025-58757 is a high-severity vulnerability affecting Project-MONAI, an open-source AI toolkit widely used for healthcare imaging applications. The vulnerability stems from insecure deserialization in the `pickle_operations` function located in `monai/data/utils.py` in versions up to and including 1.5.0. This function automatically processes dictionary key-value pairs ending with a specific suffix by deserializing their values using Python's `pickle.loads()` method without any security validation or restrictions. Since `pickle` deserialization can execute arbitrary code embedded within the serialized data, this flaw allows an attacker to craft malicious input that, when deserialized by MONAI, leads to remote code execution (RCE). The vulnerability requires no privileges (PR:N) but does require user interaction (UI:R), such as loading or processing a maliciously crafted data file or input. The attack vector is network-based (AV:N), meaning an attacker can exploit this remotely if the vulnerable MONAI instance processes untrusted data from external sources. The CVSS v3.1 base score is 8.8, reflecting high impact on confidentiality, integrity, and availability, as successful exploitation could allow full system compromise, data theft, or disruption of healthcare imaging workflows. As of the publication date, no patches or fixed versions are available, increasing the risk for organizations relying on MONAI for AI-driven medical imaging tasks. Given MONAI's specialized use in healthcare, this vulnerability poses a significant threat to the confidentiality and integrity of sensitive patient data and the availability of critical diagnostic services.
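To make the deserialization risk concrete, the standalone sketch below (not MONAI's actual code; the helper name and the `_pickled` suffix are illustrative placeholders) shows why calling `pickle.loads()` on attacker-controlled bytes amounts to code execution: pickle's `__reduce__` hook lets a payload nominate any callable, such as `os.system`, to be invoked during loading.

```python
import os
import pickle


class MaliciousPayload:
    # __reduce__ tells pickle how to "reconstruct" the object on load;
    # here it instructs the unpickler to call os.system with a command.
    def __reduce__(self):
        return (os.system, ("echo code executed during pickle.loads",))


def naive_pickle_operations(data: dict, suffix: str = "_pickled") -> dict:
    """Simplified stand-in (placeholder name) for a helper that blindly
    deserializes any value whose key ends with a given suffix."""
    return {
        key: pickle.loads(value) if key.endswith(suffix) else value
        for key, value in data.items()
    }


# Attacker-controlled input: the "serialized metadata" is really a payload.
untrusted = {"image_pickled": pickle.dumps(MaliciousPayload())}

# Deserializing it runs the embedded command before any type check can happen.
naive_pickle_operations(untrusted)
```

Because the callable is resolved and invoked during unpickling itself, no later validation of the resulting object can undo the damage; the only safe point of control is before `pickle.loads()` is reached.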
Potential Impact
For European organizations, especially hospitals, medical research institutions, and healthcare AI service providers using MONAI, this vulnerability could have severe consequences. Exploitation could lead to unauthorized access to sensitive patient imaging data, manipulation or deletion of diagnostic results, and disruption of AI-based diagnostic workflows, potentially delaying patient care. The ability to execute arbitrary code remotely could also allow attackers to pivot within healthcare networks, compromising other critical systems. Given the strict regulatory environment in Europe, including GDPR and healthcare-specific data protection laws, a breach resulting from this vulnerability could lead to significant legal and financial penalties, reputational damage, and loss of patient trust. Additionally, healthcare providers in Europe are increasingly adopting AI tools like MONAI, making this vulnerability a critical risk vector. The lack of available patches further exacerbates the threat, forcing organizations to rely on mitigation strategies until a fix is released.
Mitigation Recommendations
Since no official patches are currently available, European organizations should implement immediate compensating controls. First, restrict MONAI's exposure to untrusted or unauthenticated data sources by enforcing strict input validation and sanitization before any deserialization occurs. Implement network segmentation and firewall rules to limit access to MONAI instances only to trusted internal systems and users. Employ application-level whitelisting to ensure only authorized data formats and sources are processed. Consider running MONAI in isolated, sandboxed environments or containers with minimal privileges to contain potential exploitation impact. Monitor logs and network traffic for unusual deserialization attempts or unexpected execution patterns. Educate staff about the risks of processing untrusted data and enforce strict operational procedures for handling AI model inputs. Finally, maintain close communication with the MONAI project for updates on patches and apply them promptly once available.
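For code paths that must continue to accept serialized objects until a fix ships, one compensating control is to replace bare `pickle.loads()` with a restricted unpickler that only permits an explicit allow-list of classes. The sketch below is illustrative only: the allow-list contents are assumptions and would need to match the types a given workflow actually exchanges, and this approach does not make pickle safe in general, it merely refuses payloads that reference anything outside the list.

```python
import io
import pickle

# Hypothetical allow-list: (module, qualname) pairs a workflow legitimately needs.
# Anything else -- including os.system, subprocess, builtins.eval -- is rejected.
ALLOWED_CLASSES = {
    ("numpy", "ndarray"),
    ("numpy.core.multiarray", "_reconstruct"),
    ("numpy", "dtype"),
}


class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        # Only resolve classes that are explicitly allow-listed.
        if (module, name) in ALLOWED_CLASSES:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(
            f"Blocked deserialization of {module}.{name}: not on the allow-list"
        )


def restricted_loads(data: bytes):
    """Drop-in replacement for pickle.loads that enforces the allow-list."""
    return RestrictedUnpickler(io.BytesIO(data)).load()
```

Where the data model allows it, moving away from pickle entirely toward a non-executable serialization format (for example JSON or safetensors) is the more robust long-term direction.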
Affected Countries
Germany, France, United Kingdom, Italy, Spain, Netherlands, Sweden, Belgium, Switzerland, Denmark
Technical Details
- Data Version: 5.1
- Assigner Short Name: GitHub_M
- Date Reserved: 2025-09-04T19:18:09.500Z
- CVSS Version: 3.1
- State: PUBLISHED
Threat ID: 68bf6ad1d5a2966cfc84363e
Added to database: 9/8/2025, 11:46:25 PM
Last enriched: 9/16/2025, 1:09:21 AM
Last updated: 10/30/2025, 4:52:19 PM
Related Threats
- CVE-2025-62726: CWE-829: Inclusion of Functionality from Untrusted Control Sphere in n8n-io n8n (High)
- CVE-2025-61121: n/a (Unknown)
- CVE-2025-61120: n/a (Unknown)
- CVE-2025-60319: n/a (Unknown)
- CVE-2024-7652: Vulnerability in Mozilla Firefox (High)