
CVE-2025-9906: CWE-502 Deserialization of Untrusted Data in Keras-team Keras

Severity: High
Tags: vulnerability, cve, cve-2025-9906, cwe-502
Published: Fri Sep 19 2025 (09/19/2025, 08:15:04 UTC)
Source: CVE Database V5
Vendor/Project: Keras-team
Product: Keras

Description

The Keras Model.load_model method can be exploited to achieve arbitrary code execution, even with safe_mode=True. An attacker can create a specially crafted .keras model archive that, when loaded via Model.load_model, triggers arbitrary code execution. This is achieved by crafting a special config.json (a file within the .keras archive) that invokes keras.config.enable_unsafe_deserialization() to disable safe mode. Once safe mode is disabled, the attacker can use the Lambda layer feature of Keras, which allows arbitrary Python code in the form of pickled bytecode. Both elements can appear in the same archive: the call to keras.config.enable_unsafe_deserialization() simply needs to appear first in the archive, and the Lambda layer carrying the arbitrary code second.
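
For context, the sketch below shows the victim-side view of this load path, assuming the Keras 3 .keras saving format (a ZIP archive whose config.json drives deserialization); the file name is a hypothetical placeholder, not part of the advisory.

    # Sketch: victim-side view of the vulnerable load path (hypothetical file name).
    import zipfile

    import keras

    # A .keras file is a ZIP archive; the config.json inside it determines how
    # each layer is deserialized when the model is loaded.
    with zipfile.ZipFile("untrusted_model.keras") as archive:
        print(archive.namelist())  # typically metadata.json, config.json, model.weights.h5

    # The call this CVE targets: even with safe_mode=True, a crafted config.json
    # can invoke keras.config.enable_unsafe_deserialization() during loading and
    # then deserialize a pickled Lambda layer, executing attacker-supplied code.
    model = keras.models.load_model("untrusted_model.keras", safe_mode=True)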

AI-Powered Analysis

Last updated: 09/19/2025, 08:21:52 UTC

Technical Analysis

CVE-2025-9906 is a high-severity deserialization vulnerability (CWE-502) affecting the Keras deep learning framework, specifically version 3.0.0. The vulnerability arises from the way the Keras Model.load_model method processes .keras model archives. These archives contain a config.json file that configures model loading behavior. An attacker can craft a malicious .keras archive where the config.json file calls keras.config.enable_unsafe_deserialization(), which disables the safe_mode protection intended to prevent unsafe code execution during model loading. Once safe_mode is disabled, the attacker can leverage the Lambda layer feature in Keras, which allows embedding arbitrary Python code serialized via pickle. By placing the call that enables unsafe deserialization first and the malicious Lambda layer second within the archive, the attacker can achieve arbitrary code execution on the system loading the model.

This attack vector does not require network access but does require local or limited privileges to load a malicious model file. The vulnerability is rated with a CVSS 4.0 score of 8.6 (high severity), reflecting the significant impact on confidentiality, integrity, and availability, combined with relatively low attack complexity but requiring some privileges and user interaction (loading the model).

No known exploits are currently reported in the wild, but the potential for exploitation is substantial given the widespread use of Keras in machine learning workflows. The vulnerability highlights the risks of deserializing untrusted data and the dangers of disabling built-in safety mechanisms in ML frameworks.
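
As an illustration of the two legitimate Keras features the attack chains together, the sketch below shows their intended, benign usage; the saved file name is a hypothetical placeholder.

    import keras
    from keras import layers

    # Feature 1: the global switch that a crafted config.json invokes to turn
    # safe mode off for the remainder of the loading process.
    keras.config.enable_unsafe_deserialization()

    # Feature 2: a Lambda layer, whose Python function is serialized as bytecode
    # inside the saved model config, which is why loading it back requires
    # unsafe deserialization to be enabled.
    model = keras.Sequential([
        keras.Input(shape=(4,)),
        layers.Lambda(lambda x: x * 2.0),
    ])
    model.save("lambda_demo.keras")  # hypothetical path

Chained inside a single crafted archive, with the unsafe-deserialization call processed before the Lambda layer, these two features become an arbitrary code execution primitive.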

Potential Impact

For European organizations, this vulnerability poses a serious risk, especially those involved in AI research, development, and deployment using Keras. Successful exploitation could allow attackers to execute arbitrary code on systems that load malicious model files, potentially leading to data breaches, system compromise, or disruption of AI services. This could affect confidentiality by exposing sensitive data processed by AI models, integrity by altering model behavior or outputs, and availability by causing denial of service or system instability. Organizations relying on automated ML pipelines or sharing models across teams or with third parties are particularly vulnerable if they do not verify model provenance. The impact is heightened in sectors such as finance, healthcare, and critical infrastructure where AI models are increasingly integrated into decision-making processes. Additionally, the complexity of detecting such attacks is increased because malicious payloads are embedded within seemingly benign model files, complicating incident response and forensic analysis.

Mitigation Recommendations

To mitigate this vulnerability, European organizations should:

1. Avoid loading Keras models from untrusted or unauthenticated sources.
2. Implement strict validation and integrity checks (e.g., digital signatures) on all model files before loading; a sketch of such a pre-load check follows this list.
3. Upgrade to patched versions of Keras once available; monitor vendor advisories for updates.
4. Restrict permissions of environments where models are loaded to minimize potential damage from code execution.
5. Disable or restrict the use of the Lambda layer feature in Keras if it is not absolutely necessary, or enforce strict code review policies for any Lambda layers used.
6. Employ runtime application self-protection (RASP) or endpoint detection and response (EDR) tools to monitor for suspicious behaviors triggered by model loading.
7. Educate data scientists and ML engineers about the risks of deserializing untrusted models and enforce secure model sharing practices.
8. Consider sandboxing model loading processes to contain potential exploits.

These measures go beyond generic advice by focusing on the unique aspects of ML model handling and deserialization risks.
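
A minimal sketch of such a pre-load check, assuming the Keras 3 .keras ZIP layout; the file name and expected hash in the usage comment are hypothetical placeholders for values your own model registry would supply.

    import hashlib
    import json
    import zipfile


    def _contains_lambda(node) -> bool:
        """Recursively look for a serialized Lambda layer in the model config."""
        if isinstance(node, dict):
            if node.get("class_name") == "Lambda":
                return True
            return any(_contains_lambda(v) for v in node.values())
        if isinstance(node, list):
            return any(_contains_lambda(v) for v in node)
        return False


    def verify_and_scan(path: str, expected_sha256: str) -> None:
        # Integrity check: refuse models whose hash does not match the value
        # recorded when the model was produced or signed.
        with open(path, "rb") as fh:
            digest = hashlib.sha256(fh.read()).hexdigest()
        if digest != expected_sha256:
            raise ValueError("model hash mismatch -- refusing to load")

        # Content check: inspect config.json inside the archive before handing
        # the file to keras.models.load_model().
        with zipfile.ZipFile(path) as archive:
            raw = archive.read("config.json").decode("utf-8")
        if "enable_unsafe_deserialization" in raw:
            raise ValueError("archive attempts to disable safe mode")
        if _contains_lambda(json.loads(raw)):
            raise ValueError("archive contains a Lambda layer -- review before loading")


    # Usage (hypothetical values):
    # verify_and_scan("shared_model.keras", "<known-good sha256>")
    # model = keras.models.load_model("shared_model.keras", safe_mode=True)

Such a check is a coarse screen, not a substitute for upgrading Keras or for provenance controls; it simply refuses archives that exhibit the two markers this CVE depends on.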


Technical Details

Data Version
5.1
Assigner Short Name
Google
Date Reserved
2025-09-03T07:27:23.895Z
CVSS Version
4.0
State
PUBLISHED

Threat ID: 68cd127d2a8afe82184746e8

Added to database: 9/19/2025, 8:21:17 AM

Last enriched: 9/19/2025, 8:21:52 AM

Last updated: 9/19/2025, 9:27:32 AM

