CVE-2025-9905: CWE-913 Improper Control of Dynamically-Managed Code Resources in Keras-team Keras
The Keras Model.load_model method can be exploited to achieve arbitrary code execution, even with safe_mode=True. An attacker can create a specially crafted .h5/.hdf5 model archive that, when loaded via Model.load_model, triggers arbitrary code execution. The archive abuses the Lambda layer feature of Keras, which allows arbitrary Python code to be embedded in the form of pickled objects. The vulnerability stems from the fact that the safe_mode=True option is not honored when reading .h5 archives. Note that the .h5/.hdf5 format is a legacy format supported by Keras 3 for backwards compatibility.
AI Analysis
Technical Summary
CVE-2025-9905 is a high-severity vulnerability affecting Keras version 3.0.0, specifically in the Model.load_model method when loading legacy .h5 or .hdf5 model files. The vulnerability arises because the safe_mode=True parameter, intended to restrict execution of arbitrary code during model loading, is not enforced for this format. Attackers can craft malicious .h5 model archives that exploit the Lambda layer feature of Keras, which supports embedding arbitrary Python code via pickled objects. When such a specially crafted model is loaded, the embedded malicious code executes with the privileges of the user running the load_model function. This is an instance of improper control of dynamically-managed code resources (CWE-913), leading to arbitrary code execution (ACE).

The vulnerability is particularly dangerous because it leverages a legacy file format (.h5/.hdf5) still supported for backward compatibility in Keras 3, meaning that legacy models remain a vector for exploitation. The CVSS 4.0 score of 7.3 reflects high severity, with a local attack vector (AV:L), high attack complexity (AC:H), low privileges required (PR:L), and passive user interaction (UI:P). The impact on confidentiality, integrity, and availability is high, as arbitrary code execution can lead to full system compromise.

No known exploits are currently reported in the wild, but the potential for exploitation exists, especially in environments where untrusted or third-party models are loaded without sufficient validation. This vulnerability underscores the risks of deserializing untrusted data and the challenges of safely managing dynamic code execution in machine learning frameworks.
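Because safe_mode=True is silently ignored for the legacy format, callers cannot rely on the flag alone; one defensive option is a pre-load gate that rejects HDF5 archives outright. The sketch below is illustrative, not an official Keras API: is_legacy_keras_archive and load_model_strict are hypothetical helper names, and the magic-byte check uses the published HDF5 file signature so that a renamed archive cannot dodge a simple extension check.

```python
import pathlib

# Signature at the start of every HDF5 file, per the HDF5 format spec.
HDF5_MAGIC = b"\x89HDF\r\n\x1a\n"

def is_legacy_keras_archive(path):
    """Return True if `path` looks like a legacy .h5/.hdf5 Keras model.

    Checks both the file extension and the HDF5 magic bytes, since an
    attacker may rename a legacy archive to bypass extension checks.
    """
    p = pathlib.Path(path)
    if p.suffix.lower() in {".h5", ".hdf5"}:
        return True
    try:
        with open(p, "rb") as f:
            return f.read(8) == HDF5_MAGIC
    except OSError:
        return False

def load_model_strict(path):
    """Refuse legacy archives instead of trusting safe_mode to apply."""
    if is_legacy_keras_archive(path):
        raise ValueError(
            f"Refusing to load legacy HDF5 model {path!r}: "
            "safe_mode is not enforced for this format (CVE-2025-9905)."
        )
    # Only now hand off to Keras; kept as a comment so this sketch
    # stays runnable without Keras installed.
    # return keras.saving.load_model(path, safe_mode=True)
```

A gate like this belongs at every trust boundary where model files arrive (upload handlers, pipeline ingestion), not only at the final load call.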
Potential Impact
For European organizations, the impact of CVE-2025-9905 can be significant, particularly for those relying on Keras in machine learning workflows that load models from external or untrusted sources. Arbitrary code execution can lead to data breaches, system compromise, lateral movement within networks, and disruption of critical AI-driven services. Organizations in sectors such as finance, healthcare, manufacturing, and research that use Keras models for predictive analytics, automation, or decision-making could face operational disruptions and intellectual property theft. The vulnerability also poses risks to cloud environments and AI platforms hosted in Europe, where compromised models could be used as attack vectors to infiltrate broader infrastructure. Given the high confidentiality, integrity, and availability impact, exploitation could result in regulatory non-compliance under GDPR if personal data is exposed or manipulated. Additionally, because exploitation requires local access and passive user interaction, insider threats or compromised endpoints could be leveraged to trigger the exploit. The legacy nature of the .h5 format means organizations maintaining older models or pipelines are particularly at risk.
Mitigation Recommendations
To mitigate CVE-2025-9905, European organizations should:
1) Avoid loading Keras models from untrusted or unauthenticated sources, especially legacy .h5/.hdf5 files.
2) Transition to newer Keras model formats that do not rely on pickled code execution, or disable support for legacy formats where feasible.
3) Implement strict validation and integrity checks (e.g., cryptographic signatures) on all model files before loading.
4) Apply the principle of least privilege to environments running Keras, limiting permissions to reduce impact if exploitation occurs.
5) Monitor and restrict user access to model loading functions, especially on shared or multi-tenant systems.
6) Employ runtime application self-protection (RASP) or endpoint detection and response (EDR) tools to detect anomalous code execution patterns during model loading.
7) Educate data scientists and developers about the risks of loading untrusted models and enforce secure coding practices.
8) Stay updated on patches or official fixes from the Keras team and apply them promptly once available.
Since no patch links are currently provided, organizations should consider temporary workarounds such as disabling Lambda layers or sandboxing model loading processes.
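The integrity-check recommendation can be sketched with the Python standard library alone: maintain an allowlist of SHA-256 digests for approved model files and refuse anything else before it ever reaches load_model. The sha256_of_file and verify_model_digest helpers below are hypothetical names for illustration, not part of Keras.

```python
import hashlib

def sha256_of_file(path, chunk_size=1 << 20):
    """Hash the file in chunks so large model archives need not fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_model_digest(path, approved_digests):
    """Raise unless the file's digest is on the pre-approved allowlist."""
    digest = sha256_of_file(path)
    if digest not in approved_digests:
        raise PermissionError(
            f"Model {path!r} (sha256={digest}) is not on the allowlist; "
            "refusing to load."
        )
    return digest
```

Note that the allowlist must be distributed out-of-band (e.g., from a signed configuration repository); a digest stored alongside the model in attacker-writable storage provides no protection.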
Affected Countries
Germany, France, United Kingdom, Netherlands, Sweden, Finland, Switzerland, Italy
Technical Details
- Data Version: 5.1
- Assigner Short Name:
- Date Reserved: 2025-09-03T07:27:18.212Z
- CVSS Version: 4.0
- State: PUBLISHED
Threat ID: 68cd127d2a8afe82184746e4
Added to database: 9/19/2025, 8:21:17 AM
Last enriched: 9/19/2025, 8:22:07 AM
Last updated: 9/19/2025, 9:27:32 AM
Related Threats
- CVE-2025-10719: CWE-639 Authorization Bypass Through User-Controlled Key in WisdomGarden Tronclass (Medium)
- CVE-2025-8531: CWE-130 Improper Handling of Length Parameter Inconsistency in Mitsubishi Electric Corporation MELSEC-Q Series Q03UDVCPU (Medium)
- CVE-2025-9906: CWE-502 Deserialization of Untrusted Data in Keras-team Keras (High)
- CVE-2025-7403: Write-what-where Condition in zephyrproject-rtos Zephyr (High)
- CVE-2025-10458: Improper Handling of Length Parameter Inconsistency in zephyrproject-rtos Zephyr (High)