
CVE-2025-9905: CWE-913 Improper Control of Dynamically-Managed Code Resources in Keras-team Keras

Severity: High
Tags: vulnerability, cve-2025-9905, cwe-913
Published: Fri Sep 19 2025 (09/19/2025, 08:16:44 UTC)
Source: CVE Database V5
Vendor/Project: Keras-team
Product: Keras

Description

The Keras Model.load_model method can be exploited to achieve arbitrary code execution, even with safe_mode=True. An attacker can create a specially crafted .h5/.hdf5 model archive that, when loaded via Model.load_model, triggers execution of arbitrary code. The crafted archive abuses the Lambda layer feature of Keras, which allows arbitrary Python code to be embedded in serialized (pickled) form. The root cause is that the safe_mode=True option is not honored when reading .h5 archives. Note that the .h5/.hdf5 format is a legacy format supported by Keras 3 for backwards compatibility.
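
To illustrate the serialization surface the description refers to, the following minimal sketch (benign by construction) shows how a Python callable attached to a Lambda layer ends up inside a legacy .h5 archive and is deserialized again on load. It does not reproduce the exploit; the toy model and file name are arbitrary, and the exact behavior (warnings, or a refusal to deserialize on patched Keras releases) depends on the installed Keras version.

```python
# Benign illustration of the Lambda-layer serialization surface described
# above. This is a sketch, not the exploit: the callable here is harmless,
# and patched Keras releases may refuse to deserialize it without
# explicitly enabling unsafe deserialization.
import keras
from keras import layers

model = keras.Sequential([
    keras.Input(shape=(4,)),
    layers.Lambda(lambda x: x * 2.0),  # arbitrary Python callable, serialized with the model
    layers.Dense(1),
])

# Save in the legacy HDF5 format that this CVE concerns.
model.save("demo_lambda.h5")

# Per the advisory, safe_mode=True was not honored for .h5 archives on
# affected versions, so the serialized callable is deserialized here anyway.
reloaded = keras.models.load_model("demo_lambda.h5", safe_mode=True)
reloaded.summary()
```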

AI-Powered Analysis

Last updated: 09/27/2025, 00:56:47 UTC

Technical Analysis

CVE-2025-9905 is a high-severity vulnerability affecting Keras version 3.0.0, specifically targeting the Model.load_model method when loading legacy .h5/.hdf5 model files. The vulnerability arises because the safe_mode=True parameter, intended to restrict unsafe operations during model loading, is not enforced when processing these legacy HDF5 archives. Attackers can craft malicious .h5 files that exploit the Lambda layer feature in Keras, which allows embedding arbitrary Python code as serialized (pickled) objects. When such a malicious model is loaded, the embedded code executes, leading to arbitrary code execution in the context of the loading process. The flaw is categorized under CWE-913 (Improper Control of Dynamically-Managed Code Resources), reflecting a failure to securely manage dynamically loaded code. The attack vector is local: exploitation requires that a malicious model file be loaded on the target system, with high attack complexity, low required privileges, and some user interaction. The CVSS 4.0 score of 7.3 reflects high severity due to the potential for full compromise of the host system's confidentiality, integrity, and availability. No exploits are currently known in the wild, but the risk remains significant given the widespread use of Keras in machine learning workflows and the ease with which malicious models can be distributed or introduced into supply chains. The vulnerability is particularly relevant for environments that still rely on legacy .h5 model files rather than the newer, safer formats supported by Keras 3.
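
Because the bypass applies only to the legacy HDF5 path, one practical precaution is to identify HDF5 model files before handing them to Model.load_model. The sketch below is a hypothetical helper (not part of Keras) that checks both the file extension and the standard 8-byte HDF5 signature, so a renamed archive is still caught.

```python
# Illustrative pre-load check (hypothetical helper, not part of Keras):
# refuse legacy HDF5 model files up front, since the safe_mode bypass
# described above applies only to the .h5/.hdf5 loading path.
from pathlib import Path

HDF5_SIGNATURE = b"\x89HDF\r\n\x1a\n"  # standard 8-byte HDF5 file signature


def is_legacy_hdf5_model(path: str) -> bool:
    """Return True if the file looks like an HDF5 archive, by extension or magic bytes."""
    p = Path(path)
    if p.suffix.lower() in {".h5", ".hdf5"}:
        return True
    try:
        with p.open("rb") as f:
            return f.read(8) == HDF5_SIGNATURE
    except OSError:
        return False


def load_model_strict(path: str):
    """Load a model only if it is not in the legacy HDF5 format."""
    if is_legacy_hdf5_model(path):
        raise ValueError(
            f"Refusing to load legacy HDF5 model {path!r}; "
            "convert it to the native .keras format first."
        )
    import keras
    return keras.models.load_model(path, safe_mode=True)
```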

Potential Impact

For European organizations, the impact of this vulnerability can be substantial, especially for those heavily invested in AI and machine learning workflows using Keras. Arbitrary code execution can lead to full system compromise, data theft, sabotage of AI models, or lateral movement within networks. Organizations in sectors such as finance, healthcare, automotive, and critical infrastructure that use Keras for predictive analytics, diagnostics, or autonomous systems are at heightened risk. The vulnerability could be exploited to implant backdoors, exfiltrate sensitive data, or disrupt AI-driven operations. Given the reliance on legacy .h5 models in some environments, the threat extends to organizations that have not fully migrated to newer model formats or have legacy systems integrated into their AI pipelines. The requirement for local access and user interaction somewhat limits remote exploitation but does not eliminate risk, especially in scenarios involving insider threats, compromised developer machines, or supply chain attacks where malicious models are introduced into trusted repositories or CI/CD pipelines.

Mitigation Recommendations

European organizations should immediately audit their use of Keras, identifying any reliance on the legacy .h5/.hdf5 model format. Transitioning to the newer model formats supported by Keras 3, which do not exhibit this vulnerability, is strongly recommended. Until migration is complete, organizations should implement strict controls on model file provenance, including cryptographic signing and verification of model files before loading. Restricting access to model loading functions to trusted users and environments, and employing sandboxing or containerization to isolate model loading processes, can reduce risk. Additionally, monitoring and logging model loading activities for anomalies can help detect exploitation attempts. Organizations should also update their incident response plans to include scenarios involving malicious AI model files. Since no patch is currently available, these compensating controls are critical. Finally, educating developers and data scientists about the risks of loading untrusted models and enforcing strict code review and model validation policies will help mitigate exploitation vectors.
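
One way to implement the provenance control recommended above is to load only model files whose cryptographic digest has been signed off in advance. The sketch below uses a SHA-256 allowlist; the file names, helper names, and allowlist format are illustrative assumptions, not an established API.

```python
# Sketch of the provenance check suggested above: only load model files
# whose SHA-256 digest appears on a pre-approved allowlist. File names,
# helper names, and the allowlist format are illustrative assumptions.
import hashlib
import json
from pathlib import Path


def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, streaming in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()


def load_approved_model(path: str, allowlist_path: str = "approved_models.json"):
    """Load a Keras model only if its digest is on the signed-off allowlist."""
    approved = set(json.loads(Path(allowlist_path).read_text()))
    digest = sha256_of(path)
    if digest not in approved:
        raise PermissionError(
            f"Model {path!r} (sha256={digest}) is not on the approved list."
        )
    import keras
    # Keep safe_mode=True and prefer the native .keras format; see the
    # HDF5 check sketched earlier for rejecting legacy archives outright.
    return keras.models.load_model(path, safe_mode=True)
```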


Technical Details

Data Version: 5.1
Assigner Short Name: Google
Date Reserved: 2025-09-03T07:27:18.212Z
CVSS Version: 4.0
State: PUBLISHED

Threat ID: 68cd127d2a8afe82184746e4

Added to database: 9/19/2025, 8:21:17 AM

Last enriched: 9/27/2025, 12:56:47 AM

Last updated: 11/3/2025, 12:55:05 PM

