CVE-2025-54886: CWE-502: Deserialization of Untrusted Data in skops-dev skops

High
Vulnerability · CVE-2025-54886 · CWE-502
Published: Fri Aug 08 2025 (08/08/2025, 00:03:45 UTC)
Source: CVE Database V5
Vendor/Project: skops-dev
Product: skops

Description

skops is a Python library that helps users share and ship their scikit-learn based models. In versions 0.12.0 and below, Card.get_model contains no logic to prevent arbitrary code execution. The Card.get_model function supports both joblib and skops formats for model loading. When loading .skops models, it uses skops' secure loading with trusted-type validation, raising errors for untrusted types unless they are explicitly allowed. However, when a non-.zip file format is provided, the function silently falls back to joblib without warning. Unlike skops, joblib allows arbitrary code execution during loading, bypassing the security measures and potentially enabling malicious code execution. This issue is fixed in version 0.13.0.
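The flawed dispatch can be sketched roughly as follows. This is a simplified illustration, not the actual skops source; the two loader functions are hypothetical stand-ins for skops' secure loader and joblib's pickle-based loader:

```python
from pathlib import Path

# Hypothetical stand-ins for the two loaders involved in CVE-2025-54886.
def skops_secure_load(path):
    # The real skops loader validates types and rejects untrusted ones.
    return f"securely loaded {path}"

def joblib_unsafe_load(path):
    # joblib is pickle-based: loading can execute arbitrary code.
    return f"UNSAFE load of {path}"

def get_model(path):
    # Only recognized archive formats take the secure path; anything else
    # silently falls back to the pickle-based loader -- the core flaw.
    if Path(path).suffix in (".zip", ".skops"):
        return skops_secure_load(path)
    return joblib_unsafe_load(path)
```

The danger is that the fallback is silent: a caller who believes they are getting skops' trusted-type validation gets unrestricted pickle deserialization instead.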

AI-Powered Analysis

Last updated: 08/15/2025, 01:13:26 UTC

Technical Analysis

CVE-2025-54886 is a high-severity vulnerability affecting the skops Python library in versions prior to 0.13.0. Skops facilitates sharing and shipping of scikit-learn based machine learning models. The vulnerability arises in the Card.get_model function, which loads models serialized in either the joblib or the skops format. While skops employs secure loading with trusted-type validation to prevent arbitrary code execution, the function silently falls back to joblib, without any warning or validation, when a non-.zip file format is provided. Joblib's deserialization process is inherently unsafe because it allows arbitrary code execution during model loading. This fallback behavior effectively bypasses skops' security controls, enabling an attacker to craft a malicious serialized model that, when loaded, executes arbitrary code on the host system.

The vulnerability is categorized under CWE-502 (Deserialization of Untrusted Data), a common vector for remote code execution. Exploitation only requires that an attacker supply a malicious model file to an application using a vulnerable version of skops; no user interaction or authentication is needed, and the attack surface includes any system that loads untrusted or unauthenticated model files.

The issue was addressed in skops version 0.13.0 by removing the unsafe fallback and enforcing secure loading policies. The CVSS v3.1 base score is 8.4, reflecting high impact on confidentiality, integrity, and availability due to potential arbitrary code execution. No exploits were known in the wild as of the publication date.
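Joblib serializes objects with Python's pickle protocol, so any file it loads can carry a `__reduce__` gadget. The following self-contained sketch uses the standard-library pickle module and a harmless side-effect function in place of attacker code, but the mechanism is the same one an attacker would abuse through the joblib fallback:

```python
import pickle

executed = []

def side_effect(msg):
    # Stands in for arbitrary attacker code (e.g. os.system).
    executed.append(msg)
    return msg

class Malicious:
    def __reduce__(self):
        # Tells pickle: "to reconstruct this object, call side_effect(...)".
        return (side_effect, ("payload ran",))

payload = pickle.dumps(Malicious())

# Merely deserializing the bytes invokes side_effect -- no method of the
# resulting object ever needs to be called.
obj = pickle.loads(payload)
```

This is why no amount of post-load validation helps: the code runs during deserialization itself, before the caller sees the object.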

Potential Impact

For European organizations, this vulnerability poses a significant risk, especially those leveraging machine learning workflows that involve model sharing or deployment using skops. Compromise could lead to full system takeover, data exfiltration, or disruption of critical ML-driven services. Sectors such as finance, healthcare, automotive, and manufacturing, which increasingly rely on ML models for decision-making and automation, could face operational disruptions and data breaches. The silent fallback to joblib increases the risk of unnoticed exploitation, potentially allowing attackers to embed malicious payloads within model files distributed internally or externally. Given the high confidentiality and integrity impact, sensitive personal data protected under GDPR could be exposed or manipulated, leading to regulatory penalties and reputational damage. The vulnerability also threatens the availability of ML services, which may be critical for real-time analytics or safety systems. Since no authentication or user interaction is required, automated attacks or supply chain compromises are plausible, amplifying the threat to European organizations that integrate skops in their ML pipelines.

Mitigation Recommendations

European organizations should immediately upgrade all instances of skops to version 0.13.0 or later to eliminate the unsafe fallback to joblib. Until upgrades are complete, organizations must implement strict validation and integrity checks on all model files before loading, including cryptographic signatures or checksums to ensure authenticity. Restrict model loading to trusted sources only and avoid loading models from unverified or external inputs. Employ runtime application self-protection (RASP) or endpoint detection and response (EDR) tools to monitor for suspicious process behaviors indicative of arbitrary code execution. Incorporate network segmentation and least privilege principles to limit the impact of potential exploitation. Additionally, conduct thorough code reviews and penetration testing focused on ML model loading components. Educate data scientists and ML engineers about the risks of deserializing untrusted data and enforce secure development lifecycle practices around model serialization and deployment. Finally, maintain up-to-date inventories of ML libraries and dependencies to rapidly respond to emerging vulnerabilities.
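One of the interim mitigations above, integrity-checking model files before loading, can be sketched with the standard library. The digest registry and loader callable here are illustrative, not part of skops:

```python
import hashlib

def sha256_file(path):
    """Stream a file through SHA-256 and return its hex digest."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

def load_verified(path, expected_digest, loader):
    """Refuse to deserialize unless the file matches a known-good digest.

    `loader` is whatever trusted loading routine you use (e.g. skops'
    secure loader); the point is that it never runs on unverified bytes.
    """
    actual = sha256_file(path)
    if actual != expected_digest:
        raise ValueError(f"digest mismatch for {path}: refusing to load")
    return loader(path)
```

A checksum only proves the file is the one you vetted; it does not make an untrusted file safe, so it complements, rather than replaces, upgrading to skops 0.13.0.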


Technical Details

Data Version: 5.1
Assigner Short Name: GitHub_M
Date Reserved: 2025-07-31T17:23:33.476Z
CVSS Version: 3.1
State: PUBLISHED

Threat ID: 6895495bad5a09ad00fe8c61

Added to database: 8/8/2025, 12:48:27 AM

Last enriched: 8/15/2025, 1:13:26 AM

Last updated: 9/22/2025, 8:55:14 PM
