
CVE-2025-9905: CWE-913 Improper Control of Dynamically-Managed Code Resources in Keras-team Keras

High
Tags: vulnerability, CVE-2025-9905, CWE-913
Published: Fri Sep 19 2025 (09/19/2025, 08:16:44 UTC)
Source: CVE Database V5
Vendor/Project: Keras-team
Product: Keras

Description

The Keras Model.load_model method can be exploited to achieve arbitrary code execution, even with safe_mode=True. An attacker can create a specially crafted .h5/.hdf5 model archive that, when loaded via Model.load_model, triggers execution of arbitrary code. The crafted archive abuses the Lambda layer feature of Keras, which allows arbitrary Python code to be embedded in the form of pickled bytecode. The vulnerability stems from the fact that the safe_mode=True option is not honored when reading .h5 archives. Note that .h5/.hdf5 is a legacy format that Keras 3 continues to support for backwards compatibility.
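The underlying mechanism is pickle deserialization: unpickling can invoke an arbitrary callable via an object's __reduce__ hook. The stdlib-only sketch below demonstrates the mechanism harmlessly, substituting len for the os.system-style callable a real payload would use; it is an illustration of why pickled Lambda layers are dangerous, not an exploit for this CVE.

```python
import pickle


class MaliciousPayload:
    # __reduce__ tells pickle to reconstruct this object by calling an
    # arbitrary callable with arbitrary arguments. A real exploit would
    # return something like (os.system, ("malicious command",)) here;
    # we use the harmless builtin len instead.
    def __reduce__(self):
        return (len, ("attacker-controlled",))


blob = pickle.dumps(MaliciousPayload())

# The attacker-chosen callable runs at *load* time, before the caller
# ever inspects the resulting object.
result = pickle.loads(blob)
print(result)  # 19 == len("attacker-controlled")
```

Because the callable fires during deserialization itself, no amount of post-load validation helps; the only safe posture is to never unpickle untrusted bytes.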

AI-Powered Analysis

Machine-generated threat intelligence

Last updated: 02/27/2026, 04:31:51 UTC

Technical Analysis

CVE-2025-9905 is a vulnerability classified under CWE-913 (Improper Control of Dynamically-Managed Code Resources) affecting the Keras deep learning framework, specifically version 3.0.0. The issue arises in the Model.load_model method when loading legacy .h5 or .hdf5 model files, which are archives that can contain Keras model architectures and weights. The vulnerability exploits the Lambda layer feature in Keras, which permits embedding arbitrary Python code serialized via pickling. An attacker can craft a malicious .h5 archive containing a Lambda layer with pickled code that executes arbitrary commands upon loading.

Although Keras 3 introduced a safe_mode=True parameter intended to restrict such code execution, this option is not enforced when loading legacy .h5 files, allowing the malicious payload to run. Exploitation requires the victim to load the crafted model file, which implies local access or user interaction. The vulnerability can lead to arbitrary code execution with limited privileges, potentially compromising the host system's confidentiality, integrity, and availability. The CVSS 4.0 score of 7.3 reflects high severity: the attack vector is local and requires high attack complexity, partial privileges, and user interaction, but results in high impact across all security properties.

No patches or known exploits are currently reported, but the vulnerability poses a significant risk to organizations using Keras 3.0.0 with legacy model files, especially in environments where models from untrusted sources might be loaded. This issue underscores the risks of deserializing untrusted data and the challenges of maintaining backward compatibility with legacy formats.

Potential Impact

The vulnerability enables arbitrary code execution on systems running Keras 3.0.0 when loading maliciously crafted legacy .h5 model files. This can lead to full compromise of the affected system's confidentiality, integrity, and availability. Organizations relying on Keras for machine learning workflows, especially those that load models from external or untrusted sources, face risks of malware deployment, data theft, or disruption of AI services. The attack requires local access and user interaction, limiting remote exploitation but still posing significant insider threat or supply chain risks. The legacy .h5 format's continued support increases the attack surface, particularly in environments slow to migrate to newer formats. The impact extends to AI research labs, enterprises deploying ML models in production, and cloud services offering ML capabilities. Compromise could result in unauthorized data access, model manipulation, or service outages, undermining trust in AI systems and causing operational and reputational damage.

Mitigation Recommendations

1. Avoid loading .h5 or .hdf5 model files from untrusted or unknown sources, especially those containing Lambda layers.
2. Migrate to the native Keras 3 .keras format, which does not rely on pickled code and is not affected by this vulnerability.
3. Implement strict validation and sandboxing of model files before loading them in production environments.
4. Restrict user permissions and isolate the environments where models are loaded to limit the damage from a successful exploit.
5. Monitor for suspicious activity related to model loading and execution, including unexpected process spawning or network connections.
6. Follow Keras-team advisories and apply patches or updates addressing this vulnerability once released.
7. Educate developers and data scientists on the risks of deserializing untrusted model files and enforce secure ML development practices.
8. Disable or restrict the use of Lambda layers in models unless they are strictly necessary and verified safe.
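Recommendations 1 and 8 can be partially automated with a pre-load check: Keras .h5 files store the model architecture as JSON (in the file-level model_config attribute, readable with h5py), and that JSON can be scanned for Lambda layers before any deserialization happens. The sketch below assumes the config JSON has already been extracted from the file; find_lambda_layers is an illustrative helper, not a Keras API.

```python
import json


def find_lambda_layers(model_config_json: str) -> list[str]:
    """Return the names of Lambda layers found in a Keras model config.

    Assumes the config JSON has already been read out of the .h5 file
    (e.g. from the file-level 'model_config' attribute via h5py),
    so nothing is deserialized or executed during the scan.
    """
    found = []

    def walk(node):
        if isinstance(node, dict):
            if node.get("class_name") == "Lambda":
                found.append(node.get("config", {}).get("name", "<unnamed>"))
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(json.loads(model_config_json))
    return found


# Illustrative config fragment in the nested shape Keras uses.
sample = json.dumps({
    "class_name": "Sequential",
    "config": {"layers": [
        {"class_name": "Dense", "config": {"name": "dense_1"}},
        {"class_name": "Lambda", "config": {"name": "lambda_1"}},
    ]},
})
print(find_lambda_layers(sample))  # ['lambda_1']
```

A non-empty result should be treated as grounds to reject the file outright; since the check only reads metadata, it is safe to run on untrusted archives.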


Technical Details

Data Version
5.1
Assigner Short Name
Google
Date Reserved
2025-09-03T07:27:18.212Z
Cvss Version
4.0
State
PUBLISHED

Threat ID: 68cd127d2a8afe82184746e4

Added to database: 9/19/2025, 8:21:17 AM

Last enriched: 2/27/2026, 4:31:51 AM

Last updated: 3/24/2026, 2:21:01 PM

Views: 160

Community Reviews

0 reviews


