
CVE-2025-9905: CWE-913 Improper Control of Dynamically-Managed Code Resources in Keras-team Keras

Severity: High
Tags: vulnerability, cve-2025-9905, cwe-913
Published: Fri Sep 19 2025 (09/19/2025, 08:16:44 UTC)
Source: CVE Database V5
Vendor/Project: Keras-team
Product: Keras

Description

The Keras Model.load_model method can be exploited to achieve arbitrary code execution, even with safe_mode=True. An attacker can create a specially crafted .h5/.hdf5 model archive that, when loaded via Model.load_model, triggers execution of arbitrary code. The archive abuses the Lambda layer feature of Keras, which allows embedding arbitrary Python code in pickled form, and the safe_mode=True option is not honored when reading .h5 archives. Note that .h5/.hdf5 is a legacy format that Keras 3 still supports for backwards compatibility.
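The risky pattern is nothing more than loading a legacy archive from an untrusted source. Below is a minimal sketch of that call (the file name is a placeholder, not from the original report); per this CVE, safe_mode=True provides no protection for the legacy format:

```python
# Minimal sketch of the vulnerable call pattern (the file name is a
# placeholder for any untrusted legacy archive).
import keras

# safe_mode=True is intended to block loading models that require
# deserializing arbitrary code, but per CVE-2025-9905 it is not honored
# when reading legacy .h5/.hdf5 archives.
model = keras.models.load_model("untrusted_model.h5", safe_mode=True)
# If the archive embeds a malicious Lambda layer, this call can execute
# attacker-controlled Python code with the caller's privileges.
```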

AI-Powered Analysis

Last updated: 09/19/2025, 08:22:07 UTC

Technical Analysis

CVE-2025-9905 is a high-severity vulnerability affecting Keras version 3.0.0, specifically the Model.load_model method when loading legacy .h5 or .hdf5 model files. The vulnerability arises because the safe_mode=True parameter, intended to restrict execution of arbitrary code during model loading, is not enforced for this format. Attackers can craft malicious .h5 model archives that exploit the Lambda layer feature of Keras, which supports embedding arbitrary Python code via pickled objects. When such a specially crafted model is loaded, the embedded code executes with the privileges of the user running load_model. This is an instance of improper control of dynamically-managed code resources (CWE-913) leading to arbitrary code execution (ACE).

The vulnerability is particularly dangerous because it leverages a legacy file format (.h5/.hdf5) that Keras 3 still supports for backward compatibility, so legacy models remain a viable exploitation vector. The CVSS 4.0 score of 7.3 reflects high severity, with a local attack vector (AV:L), high attack complexity (AC:H), low privileges required (PR:L), and passive user interaction (UI:P). The impact on confidentiality, integrity, and availability is high, as arbitrary code execution can lead to full system compromise.

No exploits are currently known in the wild, but the potential for exploitation exists, especially in environments where untrusted or third-party models are loaded without sufficient validation. This vulnerability underscores the risks of deserializing untrusted data and the challenge of safely managing dynamic code execution in machine learning frameworks.
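Since the attack vector is a Lambda layer declared inside the legacy archive, one defensive option is to inspect the archive's declared architecture before any code is deserialized. The sketch below is an assumption-laden illustration: it presumes the standard Keras HDF5 layout, in which the architecture is stored as a JSON string in the model_config file attribute, and the file name is a placeholder.

```python
# Pre-load inspection sketch: reject .h5 archives that declare a Lambda
# layer, without deserializing any embedded code. Assumes the standard
# Keras HDF5 layout (architecture JSON in the "model_config" attribute).
import json
import h5py

def declares_lambda_layer(path: str) -> bool:
    """Return True if the archive's top-level config lists a Lambda layer,
    or if the layout is unrecognized (treated as unsafe)."""
    with h5py.File(path, "r") as f:
        raw = f.attrs.get("model_config")
    if raw is None:
        return True  # unknown layout: fail closed
    config = json.loads(raw)
    layers = config.get("config", {}).get("layers", [])
    return any(layer.get("class_name") == "Lambda" for layer in layers)

if declares_lambda_layer("untrusted_model.h5"):
    raise RuntimeError("Refusing to load: archive declares a Lambda layer.")
```

Note that nested sub-models can hide Lambda layers below the top level, so a production check would need to walk the config recursively; failing closed on anything unrecognized is the safer default.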

Potential Impact

For European organizations, the impact of CVE-2025-9905 can be significant, especially for those relying on Keras in machine learning workflows that load models from external or untrusted sources. Arbitrary code execution can lead to data breaches, system compromise, lateral movement within networks, and disruption of critical AI-driven services. Organizations in sectors such as finance, healthcare, manufacturing, and research that use Keras models for predictive analytics, automation, or decision-making could face operational disruption and intellectual property theft.

The vulnerability also poses risks to cloud environments and AI platforms hosted in Europe, where compromised models could serve as attack vectors into broader infrastructure. Given the high confidentiality, integrity, and availability impact, exploitation could result in regulatory non-compliance under GDPR if personal data is exposed or manipulated. The requirement for local access and passive user interaction means insider threats or compromised endpoints could be leveraged to trigger the exploit. Finally, the legacy nature of the .h5 format puts organizations maintaining older models or pipelines particularly at risk.

Mitigation Recommendations

To mitigate CVE-2025-9905, European organizations should:

1) Avoid loading Keras models from untrusted or unauthenticated sources, especially legacy .h5/.hdf5 files.
2) Transition to newer Keras model formats that do not rely on pickled code execution, or disable support for legacy formats where feasible.
3) Implement strict validation and integrity checks (e.g., cryptographic signatures) on all model files before loading, as in the sketch after this list.
4) Apply the principle of least privilege to environments running Keras, limiting permissions to reduce the impact of successful exploitation.
5) Monitor and restrict user access to model-loading functions, especially on shared or multi-tenant systems.
6) Employ runtime application self-protection (RASP) or endpoint detection and response (EDR) tools to detect anomalous code execution during model loading.
7) Educate data scientists and developers about the risks of loading untrusted models and enforce secure coding practices.
8) Stay updated on patches or official fixes from the Keras team and apply them promptly once available.

Since no patch links are currently provided, organizations should consider temporary workarounds such as refusing models that declare Lambda layers or sandboxing the model-loading process.
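For recommendation 3, even a simple digest gate helps, provided the expected digest arrives over a trusted channel (for example, a signed manifest). A minimal sketch, with placeholder path and digest:

```python
# Pre-load integrity check sketch. The expected digest must come from a
# trusted channel (e.g., a signed manifest); the values below are
# placeholders, not from the original advisory.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

EXPECTED_DIGEST = "replace-with-digest-from-trusted-manifest"

if sha256_of("model.keras") != EXPECTED_DIGEST:
    raise RuntimeError("Model failed integrity check; refusing to load.")
```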


Technical Details

Data Version: 5.1
Assigner Short Name: Google
Date Reserved: 2025-09-03T07:27:18.212Z
CVSS Version: 4.0
State: PUBLISHED

Threat ID: 68cd127d2a8afe82184746e4

Added to database: 9/19/2025, 8:21:17 AM

Last enriched: 9/19/2025, 8:22:07 AM

Last updated: 9/19/2025, 9:27:32 AM


