CVE-2025-9905: CWE-913 Improper Control of Dynamically-Managed Code Resources in Keras-team Keras
Description
The Keras Model.load_model method can be exploited to achieve arbitrary code execution, even with safe_mode=True. A specially crafted .h5/.hdf5 model archive, when loaded via Model.load_model, triggers execution of arbitrary code. The crafted archive abuses the Lambda layer feature of Keras, which allows arbitrary Python code to be embedded in the form of pickled code. The root cause is that the safe_mode=True option is not honored when reading .h5 archives. Note that .h5/.hdf5 is a legacy format supported by Keras 3 for backwards compatibility.
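As a rough illustration of the mechanism (not an exploit), the sketch below builds a model containing a Lambda layer, saves it to the legacy HDF5 format, and reloads it with safe_mode=True. It assumes an affected Keras 3 release with h5py installed; the payload is a harmless print standing in for attacker code, and the file name and shapes are illustrative.

```python
# Benign sketch of the deserialization surface, assuming an affected
# Keras 3 release (TensorFlow backend, h5py available).
import numpy as np
import keras
from keras import layers

model = keras.Sequential([
    keras.Input(shape=(1,)),
    # A true lambda forces Keras to serialize its Python bytecode
    # (pickled/marshaled) into the saved archive.
    layers.Lambda(lambda x: (print("untrusted Lambda body ran"), x)[1]),
])
model.save("demo.h5")  # legacy HDF5 format

# Per the advisory, safe_mode=True is not honored on the .h5 path, so the
# embedded code object is still deserialized; it runs when the layer fires.
reloaded = keras.models.load_model("demo.h5", safe_mode=True)
reloaded.predict(np.zeros((1, 1)))  # side effect observable here
```

On a patched release, the same load should be refused unless safe_mode is explicitly disabled.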
AI Analysis
Technical Summary
CVE-2025-9905 is a high-severity vulnerability affecting Keras version 3.0.0, specifically targeting the Model.load_model method when loading legacy .h5/.hdf5 model files. The vulnerability arises because the safe_mode=True parameter, intended to restrict unsafe operations during model loading, is not enforced when processing these legacy HDF5 archives. Attackers can craft malicious .h5 files that exploit the Lambda layer feature in Keras, which allows embedding arbitrary Python code via pickled objects. When such a malicious model is loaded, the embedded pickled code executes, leading to arbitrary code execution within the context of the loading process. The flaw is categorized under CWE-913 (Improper Control of Dynamically-Managed Code Resources), reflecting a failure to securely manage dynamically loaded code resources. Exploitation requires that a victim load a malicious model file locally (attack vector: local) and carries high attack complexity; only low privileges are needed, but some user interaction is required. The CVSS 4.0 score of 7.3 reflects high severity due to the potential for full compromise of the host system's confidentiality, integrity, and availability. No exploits are currently reported in the wild, but the risk remains significant given the widespread use of Keras in machine learning workflows and the ease with which malicious models can be distributed or introduced into supply chains. The vulnerability is particularly relevant for environments that still rely on legacy .h5 model formats rather than the newer, safer formats supported by Keras 3.
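Because safe_mode cannot be trusted to gate the legacy path on affected versions, one practical compensating control is to refuse HDF5 inputs before they ever reach the loader. The wrapper below is a hypothetical sketch, not an official Keras API; the function name and rejection policy are assumptions.

```python
# Hedged compensating control: reject legacy HDF5 archives outright,
# since safe_mode is not enforced for them on affected versions.
from pathlib import Path
import keras

LEGACY_SUFFIXES = {".h5", ".hdf5"}

def load_model_strict(path: str):
    p = Path(path)
    if p.suffix.lower() in LEGACY_SUFFIXES:
        raise ValueError(
            f"Refusing to load legacy HDF5 model {p}: safe_mode is not "
            "enforced for this format (CVE-2025-9905). Migrate to .keras."
        )
    # For the native .keras format, safe_mode=True blocks deserialization
    # of arbitrary code such as pickled Lambda bodies.
    return keras.models.load_model(p, safe_mode=True)
```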
Potential Impact
For European organizations, the impact of this vulnerability can be substantial, especially for those heavily invested in AI and machine learning workflows using Keras. Arbitrary code execution can lead to full system compromise, data theft, sabotage of AI models, or lateral movement within networks. Organizations in sectors such as finance, healthcare, automotive, and critical infrastructure that use Keras for predictive analytics, diagnostics, or autonomous systems are at heightened risk. The vulnerability could be exploited to implant backdoors, exfiltrate sensitive data, or disrupt AI-driven operations. Given the reliance on legacy .h5 models in some environments, the threat extends to organizations that have not fully migrated to newer model formats or have legacy systems integrated into their AI pipelines. The requirement for local access and user interaction somewhat limits remote exploitation but does not eliminate risk, especially in scenarios involving insider threats, compromised developer machines, or supply chain attacks where malicious models are introduced into trusted repositories or CI/CD pipelines.
Mitigation Recommendations
European organizations should immediately audit their use of Keras, identifying any reliance on the legacy .h5/.hdf5 model format. Transitioning to the newer model formats supported by Keras 3, which do not exhibit this vulnerability, is strongly recommended. Until migration is complete, organizations should implement strict controls on model file provenance, including cryptographic signing and verification of model files before loading. Restricting access to model loading functions to trusted users and environments, and employing sandboxing or containerization to isolate model loading processes, can reduce risk. Additionally, monitoring and logging model loading activities for anomalies can help detect exploitation attempts. Organizations should also update their incident response plans to include scenarios involving malicious AI model files. Since no patch is currently available, these compensating controls are critical. Finally, educating developers and data scientists about the risks of loading untrusted models and enforcing strict code review and model validation policies will help mitigate exploitation vectors.
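As one concrete form of the provenance controls described above, the sketch below verifies a model file's SHA-256 digest against a trusted manifest before loading. Full cryptographic signing would be stronger; the manifest file name and format shown here are assumptions for illustration.

```python
# Minimal provenance gate: verify a model file's SHA-256 digest against a
# trusted manifest before loading (a simpler stand-in for full signing).
import hashlib
import json
from pathlib import Path
import keras

def sha256_of(path: Path, chunk: int = 1 << 20) -> str:
    h = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def load_verified(path: str, manifest: str = "trusted_models.json"):
    p = Path(path)
    # Manifest maps file names to expected hex digests, e.g.
    # {"classifier.keras": "ab12...", ...} (format is an assumption).
    trusted = json.loads(Path(manifest).read_text())
    expected = trusted.get(p.name)
    if expected is None or sha256_of(p) != expected:
        raise PermissionError(f"{p.name} is not on the trusted-model manifest")
    return keras.models.load_model(p, safe_mode=True)
```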
Affected Countries
Germany, France, United Kingdom, Netherlands, Sweden, Finland, Italy, Spain, Belgium, Switzerland
Technical Details
- Data Version: 5.1
- Assigner Short Name:
- Date Reserved: 2025-09-03T07:27:18.212Z
- CVSS Version: 4.0
- State: PUBLISHED
Threat ID: 68cd127d2a8afe82184746e4
Added to database: 9/19/2025, 8:21:17 AM
Last enriched: 9/27/2025, 12:56:47 AM
Last updated: 11/3/2025, 12:55:05 PM
Related Threats
- CVE-2025-0987: CWE-639 Authorization Bypass Through User-Controlled Key in CB Project Ltd. Co. CVLand (Critical)
- Google Pays $100,000 in Rewards for Two Chrome Vulnerabilities (High)
- CVE-2025-48397: CWE-306 Missing Authentication for Critical Function in Eaton Eaton Brightlayer Software Suite (BLSS) (High)
- CVE-2025-48396: CWE-434 Unrestricted Upload of File with Dangerous Type in Eaton Eaton Brightlayer Software Suite (BLSS) (High)
- CVE-2025-12623: Authorization Bypass in fushengqian fuint (Low)