CVE-2025-1550: CWE-94: Improper Control of Generation of Code ('Code Injection') in Google Keras
The Keras Model.load_model function permits arbitrary code execution, even with safe_mode=True, through a manually constructed, malicious .keras archive. By altering the config.json file within the archive, an attacker can specify arbitrary Python modules and functions, along with their arguments, to be loaded and executed during model loading.
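The mechanism can be illustrated schematically. The sketch below (not a working exploit) shows how the embedded configuration can be read out of a .keras archive, which is a zip file, and what shape an attacker-controlled entry takes. The "module", "class_name", and "config" keys mirror the Keras 3 serialization format; the values shown are purely illustrative placeholders, not a real payload.

```python
# Schematic sketch only, not a working exploit. A .keras file is a zip
# archive; during load_model, Keras resolves "module"/"class_name" entries
# from config.json via dynamic import while reconstructing the model.
import json
import zipfile

# Hypothetical tampered entry: the key names follow the Keras 3
# serialization format, the values are attacker-chosen placeholders.
tampered_entry = {
    "module": "some.attacker.module",    # arbitrary import target
    "class_name": "some_callable",       # arbitrary function or class name
    "config": {"arg": "attacker data"},  # arguments supplied at load time
}

# Reading the embedded configuration out of a model archive:
with zipfile.ZipFile("model.keras") as zf:
    config = json.loads(zf.read("config.json"))
```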
AI Analysis
Technical Summary
CVE-2025-1550 is a high-severity vulnerability in Google Keras version 3.0.0, specifically within the Model.load_model function. It is classified under CWE-94 (Improper Control of Generation of Code, 'Code Injection'). The flaw allows an attacker to craft a malicious .keras archive containing a manipulated config.json. By altering this configuration file, the attacker can specify arbitrary Python modules and functions, along with their arguments, which Keras then imports and executes during the model loading process. Notably, this arbitrary code execution occurs even when the safe_mode parameter is set to True, indicating that the built-in safety mechanism is insufficient to prevent exploitation. The CVSS 4.0 vector describes a local attack vector (AV:L) with low privileges required (PR:L), attack requirements present (AT:P, i.e., specific conditions must hold), and active user interaction required (UI:A): a victim must load the malicious model file. The vulnerability impacts confidentiality, integrity, and availability of the vulnerable system at a high level (VC:H, VI:H, VA:H), and the subsequent-system confidentiality impact is also rated high (SC:H), meaning the effects can extend beyond the vulnerable component itself. Exploitation leads to execution of arbitrary code in the context of the user running the Keras model-load operation, potentially allowing attackers to compromise systems, exfiltrate data, or disrupt machine learning workflows. There are no known exploits in the wild at the time of publication, and no patches have been linked yet. This vulnerability is especially critical for environments that load Keras models from untrusted or external sources, particularly automated or production pipelines where model files may be fetched or shared without stringent validation.
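Because safe_mode does not stop this class of payload, one pragmatic defense is to inspect the embedded configuration before ever calling load_model. Below is a minimal pre-load inspection sketch. It assumes the standard Keras 3 archive layout (config.json at the archive root); the allowlist prefix and the file name are placeholders to adapt to the frameworks actually in use.

```python
# Minimal pre-load inspection sketch, assuming config.json sits at the
# root of the .keras archive. Walks the config tree and flags any
# "module" reference that falls outside an allowlist of trusted prefixes.
import json
import zipfile

ALLOWED_PREFIXES = ("keras",)  # placeholder: extend with your own packages

def find_suspicious_modules(path):
    with zipfile.ZipFile(path) as zf:
        config = json.loads(zf.read("config.json"))
    hits = []

    def walk(node):
        if isinstance(node, dict):
            module = node.get("module")
            if isinstance(module, str) and not module.startswith(ALLOWED_PREFIXES):
                hits.append(module)
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(config)
    return hits

suspicious = find_suspicious_modules("model.keras")
if suspicious:
    raise RuntimeError("Refusing to load; non-allowlisted modules: %r" % suspicious)
```

A check like this is a screening layer, not a guarantee; it should complement, not replace, provenance verification and sandboxed loading.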
Potential Impact
For European organizations, the impact of CVE-2025-1550 is significant, particularly for organizations involved in AI/ML development, research, and deployment. Organizations using Keras 3.0.0 to load models from external or third-party sources risk arbitrary code execution, which could lead to data breaches, system compromise, or disruption of critical AI services. This is especially concerning for sectors such as finance, healthcare, automotive, and telecommunications, where AI models are increasingly integrated into decision-making and operational processes. The vulnerability could be exploited to execute malicious payloads, steal sensitive data, or sabotage AI workflows, potentially causing reputational damage and regulatory non-compliance under GDPR and other data protection laws. The requirement for local access and user interaction somewhat limits remote exploitation but does not eliminate risk, as insiders or compromised users could trigger the exploit. Additionally, supply chain attacks involving poisoned model files could be a vector, impacting organizations that consume models from third-party vendors or open-source repositories.
Mitigation Recommendations
European organizations should implement several specific mitigations beyond generic advice:
1) Avoid loading Keras models from untrusted or unauthenticated sources. Establish strict provenance and integrity verification for all model files, including cryptographic signatures or checksums (a minimal checksum sketch follows this list).
2) Implement sandboxing or containerization for environments where models are loaded, limiting the impact of potential code execution.
3) Monitor and restrict the use of Keras 3.0.0 in production environments until a patch is available. Consider downgrading to earlier, unaffected versions if feasible.
4) Enhance user training and awareness to prevent inadvertent loading of malicious models, especially in data science and ML teams.
5) Employ runtime application self-protection (RASP) or endpoint detection and response (EDR) tools to detect anomalous behavior during model loading.
6) Collaborate with vendors and the open-source community to track patch releases and apply updates promptly once available.
7) Review and harden access controls to limit who can load or deploy models, minimizing the risk of exploitation via insider threats or compromised credentials.
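As a concrete instance of recommendation 1), the following sketch gates load_model behind a SHA-256 check. The file name and expected digest are placeholders; in practice the digest would come from a trusted channel such as a signed release manifest.

```python
# Minimal checksum-gate sketch: refuse to load a model whose SHA-256
# digest does not match a value obtained over a trusted channel.
import hashlib

def verify_sha256(path, expected_hex):
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    if h.hexdigest() != expected_hex:
        raise ValueError("Checksum mismatch for %s; refusing to load" % path)

# Placeholder values; only call keras.models.load_model after this passes.
verify_sha256("model.keras", "expected-sha256-hex-digest")
```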
Affected Countries
Germany, France, United Kingdom, Netherlands, Sweden, Finland, Italy, Spain
Technical Details
- Data Version: 5.1
- Assigner Short Name: Google
- Date Reserved: 2025-02-21T11:13:03.951Z
- CVSS Version: 4.0
- State: PUBLISHED
Threat ID: 687fb240a83201eaac1d91b0
Added to database: 7/22/2025, 3:46:08 PM
Last enriched: 7/30/2025, 1:26:25 AM
Last updated: 10/17/2025, 1:44:34 PM
Related Threats
- CVE-2023-28814: Vulnerability in Hikvision iSecure Center
- Critical: 'Highest Ever' Severity Score Assigned by Microsoft to ASP.NET Core Vulnerability
- High: CVE-2025-11895: CWE-639 Authorization Bypass Through User-Controlled Key in letscms Binary MLM Plan
- Medium: CVE-2025-55087: CWE-1285: Improper Validation of Specified Index, Position, or Offset in Input in Eclipse Foundation NetX Duo
- Medium: CVE-2025-55100: CWE-125 Out-of-bounds Read in Eclipse Foundation USBX