
CVE-2025-58757: CWE-502: Deserialization of Untrusted Data in Project-MONAI MONAI

Severity: High
Tags: Vulnerability, CVE-2025-58757, CWE-502
Published: Mon Sep 08 2025 (09/08/2025, 23:42:11 UTC)
Source: CVE Database V5
Vendor/Project: Project-MONAI
Product: MONAI

Description

MONAI (Medical Open Network for AI) is an AI toolkit for healthcare imaging. In versions up to and including 1.5.0, the `pickle_operations` function in `monai/data/utils.py` automatically handles dictionary key-value pairs whose keys end with a specific suffix and deserializes their values using `pickle.loads()`. The function performs this deserialization without any security checks, so untrusted input may lead to arbitrary code execution. As of the time of publication, no fixed versions are available.
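To illustrate the class of flaw described, the sketch below approximates the reported pattern. It is not the actual MONAI source; the suffix constant and function body are assumptions for illustration only:

```python
import pickle

KEY_SUFFIX = "_transforms"  # placeholder; the exact suffix used by MONAI may differ

def pickle_operations_sketch(data: dict) -> dict:
    """Approximation of the described behavior: values whose keys end with a
    given suffix are passed straight to pickle.loads() with no validation."""
    result = {}
    for key, value in data.items():
        if key.endswith(KEY_SUFFIX) and isinstance(value, bytes):
            # Unsafe: pickle.loads() will invoke any callable an attacker
            # embeds in the serialized stream (e.g. via __reduce__).
            result[key] = pickle.loads(value)
        else:
            result[key] = value
    return result
```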

AI-Powered Analysis

Last updated: 09/16/2025, 01:09:21 UTC

Technical Analysis

CVE-2025-58757 is a high-severity vulnerability affecting Project-MONAI, an open-source AI toolkit widely used for healthcare imaging applications. The vulnerability stems from insecure deserialization in the `pickle_operations` function located in `monai/data/utils.py` in versions up to and including 1.5.0. This function automatically processes dictionary key-value pairs ending with a specific suffix by deserializing their values using Python's `pickle.loads()` method without any security validation or restrictions. Since `pickle` deserialization can execute arbitrary code embedded within the serialized data, this flaw allows an attacker to craft malicious input that, when deserialized by MONAI, leads to remote code execution (RCE). The vulnerability requires no privileges (PR:N) but does require user interaction (UI:R), such as loading or processing a maliciously crafted data file or input. The attack vector is network-based (AV:N), meaning an attacker can exploit this remotely if the vulnerable MONAI instance processes untrusted data from external sources. The CVSS v3.1 base score is 8.8, reflecting high impact on confidentiality, integrity, and availability, as successful exploitation could allow full system compromise, data theft, or disruption of healthcare imaging workflows. As of the publication date, no patches or fixed versions are available, increasing the risk for organizations relying on MONAI for AI-driven medical imaging tasks. Given MONAI's specialized use in healthcare, this vulnerability poses a significant threat to the confidentiality and integrity of sensitive patient data and the availability of critical diagnostic services.
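As a reminder of why unrestricted `pickle.loads()` is exploitable, the generic Python sketch below (not MONAI-specific, with a deliberately harmless payload) shows how a pickle stream can trigger an arbitrary callable the moment it is deserialized:

```python
import pickle

class Malicious:
    def __reduce__(self):
        # An attacker would return os.system, subprocess.call, etc.;
        # print() is used here as a harmless stand-in.
        return (print, ("arbitrary code ran during unpickling",))

payload = pickle.dumps(Malicious())

# The victim only has to deserialize the data for the callable to run.
pickle.loads(payload)
```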

Potential Impact

For European organizations, especially hospitals, medical research institutions, and healthcare AI service providers using MONAI, this vulnerability could have severe consequences. Exploitation could lead to unauthorized access to sensitive patient imaging data, manipulation or deletion of diagnostic results, and disruption of AI-based diagnostic workflows, potentially delaying patient care. The ability to execute arbitrary code remotely could also allow attackers to pivot within healthcare networks, compromising other critical systems. Given the strict regulatory environment in Europe, including GDPR and healthcare-specific data protection laws, a breach resulting from this vulnerability could lead to significant legal and financial penalties, reputational damage, and loss of patient trust. Additionally, healthcare providers in Europe are increasingly adopting AI tools like MONAI, making this vulnerability a critical risk vector. The lack of available patches further exacerbates the threat, forcing organizations to rely on mitigation strategies until a fix is released.

Mitigation Recommendations

Since no official patches are currently available, European organizations should implement immediate compensating controls. First, restrict MONAI's exposure to untrusted or unauthenticated data sources by enforcing strict input validation and sanitization before any deserialization occurs. Implement network segmentation and firewall rules to limit access to MONAI instances only to trusted internal systems and users. Employ application-level whitelisting to ensure only authorized data formats and sources are processed. Consider running MONAI in isolated, sandboxed environments or containers with minimal privileges to contain potential exploitation impact. Monitor logs and network traffic for unusual deserialization attempts or unexpected execution patterns. Educate staff about the risks of processing untrusted data and enforce strict operational procedures for handling AI model inputs. Finally, maintain close communication with the MONAI project for updates on patches and apply them promptly once available.
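Where processing of externally supplied serialized data cannot be avoided, one practical compensating control is to replace bare `pickle.loads()` calls with a restricted unpickler that resolves only an explicit allow-list of classes, following the "Restricting Globals" pattern from the Python `pickle` documentation. This is a generic sketch, not a MONAI-provided API, and the allow-list shown is purely illustrative:

```python
import io
import pickle

# Only these (module, name) pairs may be resolved during unpickling.
SAFE_GLOBALS = {
    ("builtins", "dict"),
    ("builtins", "list"),
    ("collections", "OrderedDict"),
}

class RestrictedUnpickler(pickle.Unpickler):
    def find_class(self, module, name):
        if (module, name) in SAFE_GLOBALS:
            return super().find_class(module, name)
        raise pickle.UnpicklingError(f"blocked unpickling of {module}.{name}")

def restricted_loads(data: bytes):
    """Drop-in replacement for pickle.loads() that rejects unexpected classes."""
    return RestrictedUnpickler(io.BytesIO(data)).load()
```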


Technical Details

Data Version: 5.1
Assigner Short Name: GitHub_M
Date Reserved: 2025-09-04T19:18:09.500Z
CVSS Version: 3.1
State: PUBLISHED

Threat ID: 68bf6ad1d5a2966cfc84363e

Added to database: 9/8/2025, 11:46:25 PM

Last enriched: 9/16/2025, 1:09:21 AM

Last updated: 10/29/2025, 9:49:53 AM

