CVE-2025-8747: CWE-502 Deserialization of Untrusted Data in Google Keras
A safe mode bypass vulnerability in the `Model.load_model` method in Keras versions 3.0.0 through 3.10.0 allows an attacker to achieve arbitrary code execution by convincing a user to load a specially crafted `.keras` model archive.
AI Analysis
Technical Summary
CVE-2025-8747 is a high-severity vulnerability affecting Google Keras versions 3.0.0 through 3.10.0, specifically in the `Model.load_model` method. It is classified under CWE-502, deserialization of untrusted data. The core issue is a safe mode bypass: an attacker can craft a malicious `.keras` model archive that, when loaded via the vulnerable `load_model` method, leads to arbitrary code execution on the host system. The vulnerability arises because the deserialization process does not adequately validate or restrict the contents of the model archive, enabling execution of malicious payloads embedded within the model file.
The CVSS 4.0 score is 8.6 (high), with a vector indicating a local attack vector (AV:L), low attack complexity (AC:L), no attack requirements (AT:N), low privileges (PR:L), and passive user interaction (UI:P). Confidentiality, integrity, and availability are all impacted at a high level because arbitrary code execution can lead to system compromise, data theft, or disruption of machine learning workflows. No exploits are currently known in the wild, and no patches have been linked yet, so organizations using affected versions should prioritize mitigation and monitoring. This vulnerability is particularly critical in environments where Keras models are shared or loaded from untrusted sources, such as collaborative research, third-party model marketplaces, or automated model deployment pipelines.
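Keras 3 exposes a `safe_mode` flag on its model-loading API that is meant to refuse models containing arbitrary serialized Python code (for example, `Lambda` layers carrying serialized functions); CVE-2025-8747 is a bypass of exactly this check. The minimal sketch below shows how a defender would normally confirm the installed version and request safe loading. The file path is a placeholder, and on affected versions (3.0.0 through 3.10.0) the flag must not be treated as a security boundary.

```python
import keras

# Versions 3.0.0 through 3.10.0 are affected by CVE-2025-8747.
print("Installed Keras version:", keras.__version__)

# safe_mode=True (the default in Keras 3) is intended to refuse models that
# would deserialize arbitrary code, such as a Lambda layer with a serialized
# Python function. Because this CVE bypasses the check on affected versions,
# the flag alone is not a sufficient control there.
model = keras.models.load_model("model.keras", safe_mode=True)  # placeholder path
```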
Potential Impact
For European organizations, the impact of CVE-2025-8747 can be significant, especially for those relying heavily on machine learning frameworks like Keras for AI-driven applications in sectors such as finance, healthcare, automotive, and manufacturing. Arbitrary code execution could lead to unauthorized access to sensitive data, disruption of AI services, and lateral movement within networks. Organizations involved in AI research and development, or those deploying AI models in production environments, are at heightened risk. Attackers could exploit the vulnerability by tricking users into loading malicious models, potentially leading to data breaches, intellectual property theft, or sabotage of AI systems. Given the increasing adoption of AI technologies across Europe, this vulnerability poses a risk to critical infrastructure and commercial enterprises, and could affect compliance with GDPR and other data protection regulations if personal data is compromised.
Mitigation Recommendations
To mitigate this vulnerability, European organizations should:
1) Immediately audit and inventory all Keras installations to identify affected versions (3.0.0 through 3.10.0).
2) Avoid loading Keras model files from untrusted or unauthenticated sources, and implement strict validation and integrity checks (e.g., digital signatures or hashes) on all model files before loading; see the sketch after this list.
3) Employ network segmentation and least-privilege principles to limit the impact of potential exploitation, ensuring that users running Keras have minimal system privileges.
4) Monitor for unusual activity related to model loading operations and deploy endpoint detection and response (EDR) tools to detect suspicious code execution patterns.
5) Stay updated with Google's security advisories for patches or updates addressing this vulnerability, and apply them promptly once available.
6) Consider sandboxing or containerizing AI model loading processes to isolate potential malicious code execution.
7) Educate users and developers about the risks of loading untrusted models, and enforce policies restricting model sources.
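As a concrete illustration of recommendation 2, the following minimal sketch pins a known-good SHA-256 digest (distributed out of band) and refuses to hand the archive to Keras unless it matches. The digest constant, file path, and helper name are hypothetical placeholders, not part of any Keras API.

```python
import hashlib
from pathlib import Path

import keras

# Known-good digest for the model archive, distributed out of band
# (placeholder value; replace with the publisher's pinned digest).
EXPECTED_SHA256 = "0" * 64

def load_verified_model(path: str, expected_sha256: str):
    """Load a .keras archive only if its SHA-256 digest matches the pin."""
    digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
    if digest != expected_sha256:
        raise ValueError(f"Integrity check failed for {path}; refusing to load")
    # safe_mode=True is kept as defense in depth, but it is bypassable on
    # Keras 3.0.0-3.10.0 (CVE-2025-8747), so the digest check does the real work.
    return keras.models.load_model(path, safe_mode=True)

model = load_verified_model("model.keras", EXPECTED_SHA256)
```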
Affected Countries
Germany, France, United Kingdom, Netherlands, Sweden, Finland, Italy, Spain
Technical Details
- Data Version: 5.1
- Assigner Short Name:
- Date Reserved: 2025-08-08T09:37:17.811Z
- CVSS Version: 4.0
- State: PUBLISHED
Threat ID: 68999c95ad5a09ad00224b5d
Added to database: 8/11/2025, 7:32:37 AM
Last enriched: 8/19/2025, 1:27:32 AM
Last updated: 9/22/2025, 10:42:31 PM
Related Threats
- Critical: CVE-2025-9962: CWE-120 Buffer Copy without Checking Size of Input ('Classic Buffer Overflow') in Novakon P series
- Critical: CVE-2025-10412: CWE-434 Unrestricted Upload of File with Dangerous Type in MooMoo Product Options and Price Calculation Formulas for WooCommerce – Uni CPO (Premium)
- High: CVE-2025-9798: CWE-79 Improper Neutralization of Input During Web Page Generation (XSS or 'Cross-site Scripting') in Netcad Software Inc. Netigma
- Medium: CVE-2025-10857: SQL Injection in Campcodes Point of Sale System POS
- Critical: CVE-2025-10147: CWE-434 Unrestricted Upload of File with Dangerous Type in eteubert Podlove Podcast Publisher