CVE-2025-9905: CWE-913 Improper Control of Dynamically-Managed Code Resources in Keras-team Keras
The Keras Model.load_model method can be exploited to achieve arbitrary code execution, even with safe_mode=True. An attacker can create a specially crafted .h5/.hdf5 model archive that triggers arbitrary code execution when loaded via Model.load_model. The archive abuses the Lambda layer feature of Keras, which allows arbitrary Python code to be embedded in the form of pickled code. The root cause is that the safe_mode=True option is not honored when reading .h5 archives. Note that the .h5/.hdf5 format is a legacy format supported by Keras 3 for backward compatibility.
AI Analysis
Technical Summary
CVE-2025-9905 is a vulnerability classified under CWE-913 (Improper Control of Dynamically-Managed Code Resources) affecting the Keras deep learning framework, specifically version 3.0.0. The issue arises in the Model.load_model method when loading legacy .h5 or .hdf5 model files, which are archives that can contain Keras model architectures and weights. The vulnerability exploits the Lambda layer feature in Keras, which permits embedding arbitrary Python code serialized via pickling. An attacker can craft a malicious .h5 archive containing a Lambda layer whose pickled payload executes arbitrary commands upon loading.

Although Keras 3 introduced a safe_mode=True parameter intended to restrict such code execution, this option is not enforced when loading legacy .h5 files, so the malicious payload runs regardless. Exploitation requires the victim to load the crafted model file, which implies local access or user interaction. The resulting code execution runs with the privileges of the loading process and can compromise the host system's confidentiality, integrity, and availability.

The CVSS 4.0 score of 7.3 reflects high severity: the attack vector is local and requires high attack complexity, partial privileges, and user interaction, but the impact is high across all security properties. No patches or known exploits are currently reported, yet the vulnerability poses a significant risk to organizations using Keras 3.0.0 with legacy model files, especially in environments where models from untrusted sources might be loaded. This issue underscores the risks of deserializing untrusted data and the challenges of maintaining backward compatibility with legacy formats.
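Because safe_mode=True offers no protection for the legacy format, one practical stopgap is to refuse legacy HDF5 archives before they ever reach Keras. The wrapper below is a minimal sketch of that policy; the helper name safe_load_model and the deferred import are illustrative assumptions, not part of the Keras API.

```python
# Hedged sketch: reject legacy .h5/.hdf5 archives outright, since
# safe_mode=True is not honored for that format (CVE-2025-9905).
# `safe_load_model` is a hypothetical wrapper, not a Keras function.

LEGACY_EXTENSIONS = (".h5", ".hdf5")

def safe_load_model(path, **kwargs):
    """Load a Keras model, refusing legacy HDF5 files."""
    if path.lower().endswith(LEGACY_EXTENSIONS):
        raise ValueError(
            f"Refusing to load legacy HDF5 model {path!r}: "
            "safe_mode is not enforced for this format (CVE-2025-9905)."
        )
    # Deferred import so the guard itself has no Keras dependency;
    # assumes a Keras 3 installation when a load is actually attempted.
    from keras.models import load_model
    return load_model(path, safe_mode=True, **kwargs)
```

Extension checks are only a first gate: they do nothing for a legacy archive renamed to .keras, so they belong alongside, not instead of, provenance checks on where the model file came from.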
Potential Impact
The vulnerability enables arbitrary code execution on systems running Keras 3.0.0 when they load maliciously crafted legacy .h5 model files, which can lead to full compromise of the affected system's confidentiality, integrity, and availability. Organizations relying on Keras for machine learning workflows, especially those that load models from external or untrusted sources, face risks of malware deployment, data theft, or disruption of AI services. The attack requires local access and user interaction, which limits remote exploitation but still poses significant insider-threat and supply-chain risks. Continued support for the legacy .h5 format increases the attack surface, particularly in environments slow to migrate to newer formats. The impact extends to AI research labs, enterprises deploying ML models in production, and cloud services offering ML capabilities. Compromise could result in unauthorized data access, model manipulation, or service outages, undermining trust in AI systems and causing operational and reputational damage.
Mitigation Recommendations
1. Avoid loading .h5 or .hdf5 model files from untrusted or unknown sources, especially those containing Lambda layers.
2. Migrate to newer Keras model formats (e.g., SavedModel format) that do not rely on pickled code and are not affected by this vulnerability.
3. Implement strict validation and sandboxing of model files before loading them in production environments.
4. Restrict user permissions and isolate environments where models are loaded to limit potential damage from exploitation.
5. Monitor for suspicious activity related to model loading and execution, including unexpected process spawning or network connections.
6. Stay updated with Keras-team advisories and apply patches or updates addressing this vulnerability once released.
7. Educate developers and data scientists on the risks of deserializing untrusted model files and enforce secure ML development practices.
8. Consider disabling or restricting the use of Lambda layers in models unless absolutely necessary and verified safe.
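Validation of a model file (recommendations 1, 3, and 8) can be sketched as a pre-load scan for Lambda layers: a Keras .h5 archive stores the architecture as JSON, which can be walked without executing anything. The recursive traversal below is a generic illustration, not an official Keras utility; reading the JSON out of the archive would additionally require h5py, shown only in a comment.

```python
# Hedged sketch: detect Lambda layers in a deserialized Keras model
# config before the file is ever passed to Model.load_model.

def contains_lambda(config):
    """Return True if any nested layer config names a Lambda class."""
    if isinstance(config, dict):
        if config.get("class_name") == "Lambda":
            return True
        return any(contains_lambda(v) for v in config.values())
    if isinstance(config, list):
        return any(contains_lambda(v) for v in config)
    return False

# Obtaining the config from an .h5 archive might look like this
# (requires h5py; assumes the archive stores its architecture as a
# JSON string in the root group's attributes, as Keras does):
#
#   import json, h5py
#   with h5py.File(path, "r") as f:
#       config = json.loads(f.attrs["model_config"])
#   if contains_lambda(config):
#       raise ValueError(f"{path!r} contains a Lambda layer; refusing to load")
```

A scan like this blocks the straightforward Lambda-based payload but is not a substitute for sandboxing: the safest posture is still to treat any untrusted .h5 file as untrusted code.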
Affected Countries
United States, China, Germany, Japan, South Korea, United Kingdom, France, Canada, India, Australia, Netherlands, Singapore
Technical Details
- Data Version: 5.1
- Assigner Short Name:
- Date Reserved: 2025-09-03T07:27:18.212Z
- CVSS Version: 4.0
- State: PUBLISHED
Threat ID: 68cd127d2a8afe82184746e4
Added to database: 9/19/2025, 8:21:17 AM
Last enriched: 2/27/2026, 4:31:51 AM
Last updated: 3/24/2026, 2:21:01 PM