CVE-2025-9906: CWE-502 Deserialization of Untrusted Data in Keras-team Keras

Severity: High
Tags: Vulnerability, CVE-2025-9906, CWE-502
Published: Fri Sep 19 2025 (09/19/2025, 08:15:04 UTC)
Source: CVE Database V5
Vendor/Project: Keras-team
Product: Keras

Description

The Keras Model.load_model method can be exploited to achieve arbitrary code execution, even with safe_mode=True. An attacker can create a specially crafted .keras model archive that, when loaded via Model.load_model, triggers execution of arbitrary code. This is achieved by crafting a special config.json (a file within the .keras archive) that invokes keras.config.enable_unsafe_deserialization() to disable safe mode. Once safe mode is disabled, the attacker can use Keras’s Lambda layer feature, which allows arbitrary Python code in the form of pickled code. Both elements can appear in the same archive: the keras.config.enable_unsafe_deserialization() call simply needs to appear first, with the Lambda layer carrying the arbitrary code second.
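
For orientation, a .keras archive is an ordinary ZIP file whose config.json describes the model graph, and the vulnerable entry point is the standard load call. A minimal sketch of that call, with an illustrative file name that is not part of the advisory:

    import keras

    # Even with the default safe_mode=True, a crafted archive can re-enable
    # unsafe deserialization from within its own config.json before a pickled
    # Lambda layer in the same archive is deserialized and executed.
    model = keras.models.load_model("downloaded_model.keras", safe_mode=True)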

AI-Powered Analysis

Machine-generated threat intelligence

Last updated: 02/27/2026, 04:32:06 UTC

Technical Analysis

CVE-2025-9906 is a deserialization vulnerability (CWE-502) in the Keras deep learning framework, specifically affecting version 3.0.0. The vulnerability arises from the Model.load_model method, which loads machine learning models stored in .keras archive files. These archives contain a config.json file and other serialized components. An attacker can craft a malicious .keras archive whose config.json programmatically calls keras.config.enable_unsafe_deserialization(), disabling the built-in safe mode designed to prevent unsafe code execution during deserialization. Once safe mode is disabled, the attacker can embed a Lambda layer containing arbitrary pickled Python code, which is executed upon model loading. This chain allows arbitrary code execution on the host system with the privileges of the user running load_model. The attack requires the victim to load the malicious model, so some user interaction and privileges are necessary. The vulnerability is rated high severity with a CVSS score of 8.6, reflecting the significant impact on confidentiality, integrity, and availability, combined with moderate attack complexity and required privileges. No patches or fixes are currently linked, and no exploits have been observed in the wild, but the risk is substantial given Keras’s popularity in AI/ML development environments.
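
To make the archive layout concrete, the following sketch lists a .keras file’s contents and reads its config.json without invoking the deserializer; the file name is a placeholder:

    import json
    import zipfile

    # A .keras file is a plain ZIP; in Keras 3 it typically contains
    # config.json (the model graph), metadata.json, and model.weights.h5.
    with zipfile.ZipFile("suspect_model.keras") as zf:
        print(zf.namelist())
        config = json.loads(zf.read("config.json"))
        print(config.get("class_name"))  # top-level model class, e.g. "Functional"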

Potential Impact

The impact of CVE-2025-9906 is severe for organizations relying on Keras 3.0.0 for machine learning model deployment and development. Successful exploitation leads to arbitrary code execution, potentially allowing attackers to execute malicious commands, install malware, exfiltrate sensitive data, or disrupt services. This compromises confidentiality, integrity, and availability of affected systems. Since Keras is widely used in research, enterprise AI applications, and cloud-based ML pipelines, the vulnerability could be leveraged to pivot into broader network compromise or data breaches. The requirement for user interaction and some privileges limits remote exploitation but does not eliminate risk, especially in environments where untrusted models might be loaded or shared. The lack of patches increases exposure time. Organizations may face operational disruption, intellectual property theft, and regulatory compliance issues if exploited.

Mitigation Recommendations

To mitigate CVE-2025-9906, organizations should:

1) Avoid loading untrusted or unauthenticated .keras model files, especially from external or unknown sources.
2) Implement strict access controls and code-execution policies around environments where Keras models are loaded.
3) Restrict and monitor use of the Model.load_model method, limiting it to trusted personnel and automated systems.
4) Use containerization or sandboxing to isolate model-loading processes, limiting the potential damage from arbitrary code execution.
5) Watch for official patches or updates from the Keras team and apply them promptly once available.
6) Statically or dynamically analyze model archives before loading to detect a suspicious config.json or Lambda layers (see the sketch after this list).
7) Educate developers and data scientists about the risks of deserializing untrusted data in ML workflows.
8) Employ runtime monitoring and endpoint detection to identify anomalous behavior indicative of exploitation attempts.

These steps go beyond generic advice by focusing on operational controls and proactive detection tailored to this vulnerability’s exploitation vector.
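
As one concrete way to implement item 6, the sketch below scans a .keras archive’s config.json for the two ingredients this exploit chain needs, before any deserialization happens. The marker strings follow the public description of the flaw; the helper name scan_keras_archive and the file path are illustrative, not part of any official API.

    import zipfile

    # Markers drawn from the exploit chain described above. Lambda layers
    # also occur in benign models, so a hit is a cue for manual review,
    # not proof of compromise.
    SUSPICIOUS_MARKERS = (
        "enable_unsafe_deserialization",  # the call that turns off safe mode
        "Lambda",                         # the layer type that can carry pickled code
    )

    def scan_keras_archive(path: str) -> list[str]:
        """Return suspicious markers found in config.json, without loading the model."""
        raw = ""
        with zipfile.ZipFile(path) as zf:
            if "config.json" in zf.namelist():
                raw = zf.read("config.json").decode("utf-8", errors="replace")
        return [marker for marker in SUSPICIOUS_MARKERS if marker in raw]

    hits = scan_keras_archive("untrusted_model.keras")
    if hits:
        print(f"Do not load: found {hits}")

Pairing this check with item 4, i.e. running both the scan and any eventual load inside a throwaway container, keeps even the inspection step away from production hosts.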

Technical Details

Data Version: 5.1
Assigner Short Name: Google
Date Reserved: 2025-09-03T07:27:23.895Z
CVSS Version: 4.0
State: PUBLISHED

Threat ID: 68cd127d2a8afe82184746e8

Added to database: 9/19/2025, 8:21:17 AM

Last enriched: 2/27/2026, 4:32:06 AM

Last updated: 3/25/2026, 4:31:06 AM

Views: 248


