
CVE-2025-8747: CWE-502 Deserialization of Untrusted Data in Google Keras

Severity: High
Category: Vulnerability
Tags: cve-2025-8747, cwe-502
Published: Mon Aug 11 2025 (08/11/2025, 07:21:16 UTC)
Source: CVE Database V5
Vendor/Project: Google
Product: Keras

Description

A safe mode bypass vulnerability in the `Model.load_model` method in Keras versions 3.0.0 through 3.10.0 allows an attacker to achieve arbitrary code execution by convincing a user to load a specially crafted `.keras` model archive.

AI-Powered Analysis

Machine-generated threat intelligence

Last updated: 02/27/2026, 04:24:43 UTC

Technical Analysis

CVE-2025-8747 is a deserialization vulnerability classified under CWE-502 that affects the Model.load_model method in Google Keras versions 3.0.0 through 3.10.0. The vulnerability arises because the method does not adequately enforce safe mode restrictions when loading .keras model archives, allowing attackers to embed malicious payloads within these archives. When a user loads such a crafted model, the deserialization process executes attacker-controlled code, leading to arbitrary code execution on the host system.

Exploitation requires the attacker to convince a user to load a malicious model file, so user interaction is necessary. The CVSS 4.0 base score is 8.6 (high severity), reflecting the vulnerability's potential for significant impact on confidentiality, integrity, and availability despite requiring low privileges and user interaction.

This flaw is particularly critical given Keras's widespread use in machine learning workflows across academia and industry, where models are frequently shared and deployed. No patches or known exploits are currently available, but the risk remains substantial due to the ease of embedding malicious code in model files and the potential for attackers to compromise systems running vulnerable Keras versions.
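Because the attacker-controlled payload has to be referenced somewhere in the model's serialized configuration, one cheap defensive measure is to inspect a .keras archive before ever handing it to Keras. The sketch below assumes the documented Keras 3 archive layout (a ZIP containing a config.json that describes every layer); the marker strings are illustrative heuristics, not an official blocklist, and a clean scan does not prove a model is safe:

```python
import json
import zipfile

# Heuristic markers for risky serialized objects in a Keras 3 config.
# These strings are illustrative assumptions, not an authoritative list.
SUSPICIOUS_MARKERS = ("lambda", "function", "__main__", "subprocess")

def prescreen_keras_archive(path):
    """Return the suspicious markers found in the archive's config.json.

    A .keras file (Keras 3 format) is a ZIP archive whose config.json
    describes the model graph, so scanning that text for references to
    serialized callables is a cheap first filter before loading.
    """
    with zipfile.ZipFile(path) as archive:
        config = json.loads(archive.read("config.json"))
    text = json.dumps(config).lower()
    return [marker for marker in SUSPICIOUS_MARKERS if marker in text]
```

A non-empty result would justify quarantining the file for manual review rather than loading it, even with safe mode enabled.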

Potential Impact

The impact of CVE-2025-8747 is substantial for organizations using Keras for machine learning model development and deployment. Successful exploitation enables arbitrary code execution, which can lead to full system compromise, data theft, or disruption of machine learning services. This threatens the confidentiality of sensitive data processed by ML models, the integrity of model outputs, and the availability of AI-driven applications. Organizations relying on automated model loading or sharing models across teams or with external partners are particularly vulnerable. The attack vector requires user interaction but can be executed with low privileges, increasing the likelihood of exploitation in environments where users may inadvertently load untrusted models. The vulnerability could also be leveraged in supply chain attacks targeting AI workflows, amplifying its impact. Given the growing reliance on AI and ML in critical sectors such as finance, healthcare, and autonomous systems, the potential damage from exploitation is significant and could disrupt business operations and erode trust in AI systems.

Mitigation Recommendations

To mitigate CVE-2025-8747, organizations should:

- Implement strict controls on the sources of .keras model files, ensuring only trusted and verified models are loaded.
- Validate model integrity with digital signatures or cryptographic hashes before loading, so malicious models are rejected.
- Train users to recognize the risks of loading models from untrusted sources and to avoid executing unverified code embedded in model files.
- Until official patches are released, isolate environments where models are loaded, such as with containerization or sandboxing, to limit the impact of potential exploitation.
- Monitor and log model-loading activity to help detect suspicious behavior.
- Track updates from Google and apply security patches promptly once available.
- Review and restrict the permissions of users who load models to reduce the attack surface.
- Integrate security reviews into the ML model lifecycle, including code and artifact audits, to identify and mitigate risks early.
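The integrity-validation recommendation can be sketched with the Python standard library alone: stream a model file through SHA-256 and compare the digest against a known-good value before loading. This is a minimal sketch; the function names are illustrative, and in practice the trusted digests would come from a signed manifest rather than a hard-coded value:

```python
import hashlib
import hmac

def sha256_of_file(path, chunk_size=1 << 20):
    """Hash the file in chunks so large model archives need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_model(path, expected_hex):
    """Compare against the trusted digest in constant time before loading."""
    return hmac.compare_digest(sha256_of_file(path), expected_hex)
```

Only if `verify_model` returns True would the file be passed on to Keras; a mismatch means the artifact was altered or did not come from the expected source.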


Technical Details

Data Version: 5.1
Assigner Short Name: Google
Date Reserved: 2025-08-08T09:37:17.811Z
CVSS Version: 4.0
State: PUBLISHED

Threat ID: 68999c95ad5a09ad00224b5d

Added to database: 8/11/2025, 7:32:37 AM

Last enriched: 2/27/2026, 4:24:43 AM

Last updated: 3/25/2026, 9:22:29 AM

Views: 163

Community Reviews

0 reviews


