CVE-2025-1550: CWE-94: Improper Control of Generation of Code ('Code Injection') in Google Keras

Severity: High
Tags: Vulnerability, CVE-2025-1550, CWE-94
Published: Tue Mar 11 2025 (03/11/2025, 08:12:34 UTC)
Source: CVE Database V5
Vendor/Project: Google
Product: Keras

Description

The Keras Model.load_model function permits arbitrary code execution, even with safe_mode=True, through a manually constructed, malicious .keras archive. By altering the config.json file within the archive, an attacker can specify arbitrary Python modules and functions, along with their arguments, to be loaded and executed during model loading.
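Concretely, a .keras file is a ZIP archive whose config.json describes the model architecture as serialized objects carrying "module" and "class_name" keys that load_model resolves to real Python objects. The sketch below is an illustrative inspection helper, not part of Keras; the function name and file path are assumptions. It lists those references so an archive can be reviewed before it is ever loaded:

```python
import json
import zipfile

def list_referenced_modules(path):
    """List every module.class reference a .keras archive asks Keras to resolve."""
    # A .keras file is a ZIP archive; the architecture lives in config.json.
    with zipfile.ZipFile(path) as archive:
        config = json.loads(archive.read("config.json"))

    refs = set()

    def walk(node):
        if isinstance(node, dict):
            # Serialized Keras objects carry "module"/"class_name" keys that
            # load_model turns into imports and calls at load time.
            if "module" in node:
                refs.add(f"{node['module']}.{node.get('class_name', '?')}")
            for value in node.values():
                walk(value)
        elif isinstance(node, list):
            for item in node:
                walk(item)

    walk(config)
    return sorted(refs)

# Anything outside the keras.* namespace deserves scrutiny before loading.
for ref in list_referenced_modules("suspect_model.keras"):
    print(ref)
```

In a benign model every reference resolves inside the keras namespace; a reference such as builtins.exec or os.system is a strong indicator of the tampering described above.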

AI-Powered Analysis

Last updated: 07/30/2025, 01:26:25 UTC

Technical Analysis

CVE-2025-1550 is a high-severity vulnerability in Google Keras version 3.0.0, specifically within the Model.load_model function. The flaw is classified under CWE-94 (Improper Control of Generation of Code, or 'Code Injection'): an attacker can craft a malicious .keras archive containing a manipulated config.json that names arbitrary Python modules and functions, along with their arguments, which are then imported and executed during the model loading process. Notably, this arbitrary code execution occurs even when the safe_mode parameter is set to True, indicating that the built-in safety mechanism is insufficient to prevent exploitation. Per the CVSS 4.0 vector, the attack requires local access (AV:L), low privileges (PR:L), active user interaction (UI:A), and specific attack requirements to be present (AT:P). Impact on the vulnerable system's confidentiality, integrity, and availability is rated high (VC:H, VI:H, VA:H), and subsequent-system confidentiality is also rated high (SC:H), meaning the compromise can extend beyond the vulnerable component itself. Successful exploitation executes arbitrary code in the context of the user running the Keras model load operation, potentially allowing attackers to compromise systems, exfiltrate data, or disrupt machine learning workflows. There are no known exploits in the wild at the time of publication, and no patches have been linked yet. The vulnerability is most critical for environments that load Keras models from untrusted or external sources, especially automated or production pipelines where model files may be fetched or shared without stringent validation.
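The core weakness is a deserialization pattern that turns attacker-controlled strings into imported Python objects. The snippet below is an illustrative reconstruction of that pattern under CWE-94, not Keras's actual implementation:

```python
import importlib

def resolve_from_config(entry):
    """Illustrative UNSAFE resolver: trusts module/function names from config.json."""
    module = importlib.import_module(entry["module"])    # attacker chooses the module
    func = getattr(module, entry["class_name"])          # ...and the callable
    return func(*entry.get("args", []), **entry.get("kwargs", {}))  # ...and its arguments

# A manipulated config.json entry such as
#   {"module": "builtins", "class_name": "exec", "args": ["<payload>"]}
# runs arbitrary code the moment it is resolved, which is why an allowlist
# check must happen before any import, not after object construction.
```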

Potential Impact

For European organizations, the impact of CVE-2025-1550 is significant, particularly for those involved in AI/ML development, research, and deployment. Organizations using Keras 3.0.0 to load models from external or third-party sources risk arbitrary code execution, which could lead to data breaches, system compromise, or disruption of critical AI services. This is especially concerning for sectors such as finance, healthcare, automotive, and telecommunications, where AI models are increasingly integrated into decision-making and operational processes. The vulnerability could be exploited to execute malicious payloads, steal sensitive data, or sabotage AI workflows, potentially causing reputational damage and regulatory non-compliance under GDPR and other data protection laws. The requirement for local access and user interaction limits remote exploitation but does not eliminate risk, as insiders or compromised users could trigger the exploit. In addition, supply chain attacks involving poisoned model files are a realistic vector for organizations that consume models from third-party vendors or open-source repositories.

Mitigation Recommendations

European organizations should implement several specific mitigations beyond generic advice:

1. Avoid loading Keras models from untrusted or unauthenticated sources. Establish strict provenance and integrity verification for all model files, including cryptographic signatures or checksums (a digest-verification sketch follows this list).
2. Implement sandboxing or containerization for environments where models are loaded, limiting the impact of potential code execution (see the container sketch below).
3. Monitor and restrict the use of Keras 3.0.0 in production environments until a patch is available; consider downgrading to earlier, unaffected versions if feasible.
4. Enhance user training and awareness to prevent inadvertent loading of malicious models, especially within data science and ML teams.
5. Employ runtime application self-protection (RASP) or endpoint detection and response (EDR) tools to detect anomalous behavior during model loading.
6. Collaborate with vendors and the open-source community to track patch releases and apply updates promptly once available.
7. Review and harden access controls to limit who can load or deploy models, minimizing the risk of exploitation via insider threats or compromised credentials.
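For mitigation 1, a minimal integrity gate can be placed in front of every load. The sketch below assumes a known-good digest recorded out-of-band; the manifest dictionary, file path, and function names are hypothetical placeholders, while keras.saving.load_model is the Keras 3 loading entry point:

```python
import hashlib
import keras

# Hypothetical manifest of known-good digests, distributed out-of-band
# (ideally inside a signed artifact rather than alongside the model files).
EXPECTED_DIGESTS = {
    "models/classifier.keras": "replace-with-known-good-sha256-digest",
}

def sha256_of(path):
    """Stream the file so large model archives need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verified_load(path):
    """Refuse to load any model whose digest is unknown or mismatched."""
    if EXPECTED_DIGESTS.get(path) != sha256_of(path):
        raise RuntimeError(f"{path}: digest mismatch or unknown file, refusing to load")
    # safe_mode=True alone does not stop CVE-2025-1550, hence the gate above.
    return keras.saving.load_model(path, safe_mode=True)
```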
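For mitigation 2, untrusted models can be exercised in a throwaway container before they touch production hosts. This sketch shells out to the Docker CLI; the image name (ml-loader:latest) is a hypothetical stand-in for any image with Keras installed:

```python
import subprocess

def smoke_load_in_sandbox(model_path):
    """Open an untrusted model inside a disposable, network-less container."""
    # model_path must be an absolute path for the Docker volume mount.
    subprocess.run(
        [
            "docker", "run", "--rm",
            "--network", "none",                    # no exfiltration channel
            "--memory", "2g",                       # cap resource consumption
            "-v", f"{model_path}:/model.keras:ro",  # mount the model read-only
            "ml-loader:latest",                     # hypothetical Keras-equipped image
            "python", "-c",
            "import keras; keras.saving.load_model('/model.keras'); print('loaded')",
        ],
        check=True,  # raise if the load crashes or the container is killed
    )
```

If the load attempt misbehaves, only the disposable container is affected, and the model file can be quarantined before it reaches any host that matters.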

Technical Details

Data Version: 5.1
Assigner Short Name: Google
Date Reserved: 2025-02-21T11:13:03.951Z
CVSS Version: 4.0
State: PUBLISHED

Threat ID: 687fb240a83201eaac1d91b0

Added to database: 7/22/2025, 3:46:08 PM

Last enriched: 7/30/2025, 1:26:25 AM

Last updated: 9/1/2025, 6:20:41 PM
