CVE-2025-9906: CWE-502 Deserialization of Untrusted Data in Keras-team Keras
The Keras Model.load_model method can be exploited to achieve arbitrary code execution, even with safe_mode=True. An attacker can create a specially crafted .keras model archive that, when loaded via Model.load_model, triggers arbitrary code execution. This is achieved by crafting a special config.json (a file within the .keras archive) that invokes keras.config.enable_unsafe_deserialization() to disable safe mode. Once safe mode is disabled, the attacker can use the Lambda layer feature of Keras, which allows arbitrary Python code in the form of pickled code. Both can appear in the same archive; the only requirement is that the keras.config.enable_unsafe_deserialization() call appears first in the archive and the Lambda layer carrying the arbitrary code appears second.
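For context, loading a model in Keras 3 typically looks like the snippet below (the file name is illustrative). In affected versions, passing safe_mode=True offers no protection against this particular attack, because the crafted config.json re-enables unsafe deserialization while the archive is being processed.

```python
import keras

# Typical model-loading call; safe_mode=True is also the default in Keras 3.
# In versions affected by CVE-2025-9906, a crafted .keras archive can still
# achieve code execution, because its config.json calls
# keras.config.enable_unsafe_deserialization() before the malicious Lambda
# layer is deserialized.
model = keras.models.load_model("downloaded_model.keras", safe_mode=True)
```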
AI Analysis
Technical Summary
CVE-2025-9906 is a critical deserialization vulnerability (CWE-502) in the Keras deep learning framework (the CVE record lists version 3.0.0 as affected). The flaw lies in the Model.load_model method, which loads saved Keras models from .keras archive files. Keras normally mitigates risk by enabling safe_mode, which restricts unsafe deserialization. However, an attacker can craft a malicious .keras archive containing a specially constructed config.json file that calls keras.config.enable_unsafe_deserialization(), effectively disabling safe_mode during loading. Once safe_mode is disabled, the attacker can exploit the Lambda layer feature, which permits embedding arbitrary Python code serialized via pickle. By placing the call that enables unsafe deserialization first in the archive and the malicious Lambda layer second, the attacker achieves arbitrary code execution as soon as the model is loaded. Exploitation requires the victim to load the malicious model using Model.load_model, which is plausible in environments where models are shared or downloaded from untrusted sources. The vulnerability carries a CVSS 4.0 score of 8.6 (high severity), reflecting its significant potential impact. Although no exploits have been reported in the wild yet, the ease of embedding malicious code in model files and the widespread use of Keras in machine learning pipelines make this a serious threat. Arbitrary code execution affects confidentiality, integrity, and availability, enabling data theft, system compromise, or denial of service. The attack requires low privileges and user interaction (someone must load the malicious model), but no additional authentication. This case highlights the risks of deserializing untrusted data in ML frameworks and the importance of strict validation and sandboxing when loading models.
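As background for the analysis above: a .keras file is a ZIP archive whose config.json describes the model graph, so a defender can examine that file without deserializing anything. The following is a minimal sketch of such a pre-load check; the marker strings are heuristic assumptions, not a complete or robust defense.

```python
import zipfile

# Heuristic markers: references to disabling safe mode and to Lambda layers,
# the two ingredients of the attack described above.
SUSPICIOUS_MARKERS = (
    "enable_unsafe_deserialization",
    '"class_name": "Lambda"',
)

def inspect_keras_archive(path: str) -> list[str]:
    """Return suspicious markers found in the archive's config.json."""
    with zipfile.ZipFile(path) as archive:
        with archive.open("config.json") as fh:
            config_text = fh.read().decode("utf-8", errors="replace")
    return [marker for marker in SUSPICIOUS_MARKERS if marker in config_text]

# Example: refuse to load anything that trips the heuristic.
findings = inspect_keras_archive("downloaded_model.keras")
if findings:
    raise RuntimeError(f"Suspicious model configuration {findings}; refusing to load.")
```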
Potential Impact
For European organizations, the impact of CVE-2025-9906 can be substantial, especially for those relying on Keras in machine learning workflows in critical sectors such as finance, healthcare, manufacturing, and research. Arbitrary code execution can lead to unauthorized access to sensitive data, manipulation of ML models causing incorrect predictions or decisions, and disruption of services that depend on AI models. Organizations using shared or third-party models are particularly at risk, as attackers can distribute malicious models to compromise internal systems. A breach of confidentiality could expose personal data protected under the GDPR, leading to regulatory penalties and reputational damage. Integrity attacks on ML models may result in flawed analytics or automated decisions, impacting business operations and safety-critical applications. Availability may also be affected if attackers deploy ransomware or disrupt ML services. Given the increasing integration of AI in European industries, this vulnerability threatens both operational security and compliance with data protection laws.
Mitigation Recommendations
To mitigate this vulnerability, European organizations should: 1) Update Keras to a patched version as soon as one is available; no patch links are referenced in the advisory yet, but a fix is expected. 2) Avoid loading Keras models from untrusted or unauthenticated sources; implement strict validation and integrity checks (e.g., digital signatures or pinned checksums) on model files before loading. 3) Disable or restrict the use of Lambda layers in models, especially those sourced externally, as they allow arbitrary code execution. 4) Employ runtime sandboxing or containerization for environments where models are loaded, to limit the impact of potential exploits. 5) Monitor and audit model-loading activity and system logs for behavior indicative of exploitation attempts. 6) Educate data scientists and ML engineers about the risks of deserializing untrusted models and enforce secure development practices. 7) Implement network segmentation to isolate ML infrastructure from critical systems and reduce lateral movement in case of compromise. These measures go beyond generic advice by focusing on secure model provenance, runtime containment, and operational monitoring specific to ML workflows. A minimal integrity-check sketch appears below.
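As an illustration of recommendation 2, the sketch below pins a model file to a checksum obtained over a trusted channel before it is ever passed to Model.load_model. The digest value and file name are placeholders, and checksum pinning complements, rather than replaces, updating Keras and restricting model sources.

```python
import hashlib

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file in streaming fashion."""
    digest = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Placeholder: the trusted digest would be published by the model provider
# out of band (signed release notes, internal model registry, etc.).
EXPECTED_DIGEST = "<trusted sha256 published out of band>"

if sha256_of("downloaded_model.keras") != EXPECTED_DIGEST:
    raise RuntimeError("Model file does not match its trusted checksum; refusing to load.")
```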
Affected Countries
Germany, France, United Kingdom, Netherlands, Sweden, Finland, Italy, Spain
Technical Details
- Data Version: 5.1
- Assigner Short Name:
- Date Reserved: 2025-09-03T07:27:23.895Z
- CVSS Version: 4.0
- State: PUBLISHED
Threat ID: 68cd127d2a8afe82184746e8
Added to database: 9/19/2025, 8:21:17 AM
Last enriched: 9/27/2025, 12:57:02 AM
Last updated: 11/3/2025, 2:54:10 PM
Related Threats
- CVE-2025-8900: CWE-269 Improper Privilege Management in dreamstechnologies Doccure Core (Critical)
- RondoDox v2: When an IoT Botnet Goes Enterprise-Ready (High)
- CVE-2025-12626: Path Traversal in jeecgboot jeewx-boot (Medium)
- CVE-2025-64294: CWE-862 Missing Authorization in d3wp WP Snow Effect (Medium)
- CVE-2025-0987: CWE-639 Authorization Bypass Through User-Controlled Key in CB Project Ltd. Co. CVLand (Critical)