CVE-2025-1550: CWE-94: Improper Control of Generation of Code ('Code Injection') in Google Keras
The Keras Model.load_model function permits arbitrary code execution, even with safe_mode=True, through a manually constructed, malicious .keras archive. By altering the config.json file within the archive, an attacker can specify arbitrary Python modules and functions, along with their arguments, to be loaded and executed during model loading.
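Because the .keras format is a ZIP archive whose config.json drives deserialization, a reviewer can inspect that file without ever loading the model. The sketch below is a minimal example under that assumption; the member names follow the standard Keras 3 layout (metadata.json, config.json, model.weights.h5), and the file path is hypothetical.

```python
# Minimal inspection sketch (illustrative, not from the advisory): dump config.json
# from a .keras archive so a reviewer can see which modules/classes the loader
# would be asked to resolve, before any call to keras.models.load_model().
import json
import zipfile

def dump_keras_config(path: str) -> dict:
    """Extract and return the parsed config.json from a .keras archive without loading the model."""
    with zipfile.ZipFile(path) as archive:
        print("Archive members:", archive.namelist())
        with archive.open("config.json") as fh:
            config = json.load(fh)
    print(json.dumps(config, indent=2)[:2000])  # preview only
    return config

# Example (hypothetical file name):
# cfg = dump_keras_config("untrusted_model.keras")
```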
AI Analysis
Technical Summary
CVE-2025-1550 is a high-severity vulnerability in Google Keras version 3.0.0, specifically affecting the Model.load_model function. It arises from improper control of generation of code (CWE-94, code injection), allowing an attacker to achieve arbitrary code execution during the model loading process. The root cause is that the .keras archive format, which packages model data including a config.json file, can be maliciously crafted: by modifying config.json, an attacker can specify arbitrary Python modules, functions, and their arguments to be executed when the model is loaded, bypassing the intended safe_mode protections. Even when safe_mode=True is set, the function does not adequately sanitize or restrict the code that can be executed, leading to a critical security risk. Exploitation requires an attacker to deliver a malicious .keras file to a victim who then loads the model with the vulnerable Keras version. The vulnerability has a CVSS 4.0 base score of 7.3 (high severity), with a local attack vector (AV:L), low attack complexity (AC:L), low privileges required (PR:L), active user interaction required (UI:A), and partial impact on confidentiality, integrity, and availability of the vulnerable system together with high impact on subsequent systems. No known exploits are currently reported in the wild, but the potential for damage is significant given the ability to execute arbitrary code within the victim's environment during model loading.
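For context, Keras 3 serializes each object in config.json with fields such as "module", "class_name", and "config"; on load, the named module is imported and the named attribute is resolved and invoked to rebuild the object. The fragment below is a hypothetical illustration (no working payload is published in this advisory) of why an attacker-controlled "module"/"class_name" pair amounts to code injection, together with a simple heuristic flag.

```python
# Hypothetical illustration only -- no working exploit payload is reproduced here.
# Keras 3 records serialized objects with "module", "class_name", and "config" fields;
# an entry pointing outside the keras namespace lets the archive author choose which
# Python callable runs during load_model(), as described in the advisory.
legitimate_entry = {
    "module": "keras.layers",
    "class_name": "Dense",
    "config": {"units": 64, "activation": "relu"},
}

suspicious_entry = {
    "module": "some.arbitrary.module",   # attacker-chosen module (illustrative)
    "class_name": "some_callable",       # attacker-chosen function or class (illustrative)
    "config": {"args": "attacker-controlled"},
}

def outside_keras_namespace(entry: dict) -> bool:
    """Heuristic: flag serialized objects whose module is not part of Keras itself."""
    module = entry.get("module") or ""
    return not (module == "keras" or module.startswith("keras."))

print(outside_keras_namespace(legitimate_entry))  # False
print(outside_keras_namespace(suspicious_entry))  # True
```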
Potential Impact
For European organizations, this vulnerability poses a substantial risk, especially to those involved in AI/ML development, research, and deployment where Keras 3.0.0 is used. Arbitrary code execution can lead to unauthorized data access, data manipulation, or disruption of AI services. Confidentiality can be compromised if sensitive data processed by the model is accessed or exfiltrated. Integrity is at risk because attackers can alter model behavior or outputs, potentially leading to incorrect decisions in critical applications such as healthcare, finance, or autonomous systems. Availability may also be affected if the attacker executes destructive payloads or disrupts model loading. Given the widespread adoption of Keras in European academia, industry, and government AI projects, exploitation could have cascading effects on trust and operational continuity. The requirement for local access and user interaction somewhat limits remote exploitation but does not eliminate risk in environments where untrusted models are loaded or shared. The lack of known exploits suggests the vulnerability is not yet actively weaponized, but proactive mitigation is essential to prevent future attacks.
Mitigation Recommendations
European organizations should implement several specific mitigations beyond generic patching advice:
1) Avoid loading Keras models from untrusted or unauthenticated sources, and establish strict validation and provenance checks for all model files before loading.
2) Implement sandboxing or containerization for environments where models are loaded, to contain potential malicious code execution.
3) Monitor and restrict Python module imports and function calls during model loading by customizing or extending the Keras loading mechanism where feasible (a pre-load allow-list check is sketched after this list).
4) Employ runtime application self-protection (RASP) or endpoint detection and response (EDR) tools to detect anomalous behavior during model loading.
5) Educate developers and data scientists about the risks of loading untrusted models and enforce policies restricting model sharing.
6) Track updates from Google and apply patches promptly once available, as no patch links are currently provided.
7) Consider alternative model serialization formats or frameworks with stronger security guarantees until this vulnerability is resolved.
These measures collectively reduce the attack surface and limit the impact of potential exploitation.
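As referenced in item 3, one practical control is to examine the serialized configuration before handing the archive to Keras at all. A minimal sketch, assuming the standard Keras 3 archive layout and an example allow-list policy; the function names and file path are illustrative, not an official Keras API.

```python
# Pre-load allow-list sketch (assumptions: Keras 3 archive layout with a top-level
# config.json; the allow-list policy below is an example, not an official API).
# It rejects archives whose serialized objects reference modules outside approved
# prefixes before keras.models.load_model() is ever called.
import json
import zipfile

ALLOWED_MODULES = {"keras"}        # exact module names that are always acceptable
ALLOWED_PREFIXES = ("keras.",)     # dotted prefixes; extend with vetted in-house packages

def _iter_modules(node):
    """Recursively yield every 'module' string found in the serialized config."""
    if isinstance(node, dict):
        module = node.get("module")
        if isinstance(module, str):
            yield module
        for value in node.values():
            yield from _iter_modules(value)
    elif isinstance(node, list):
        for item in node:
            yield from _iter_modules(item)

def precheck_keras_archive(path: str) -> None:
    """Raise ValueError if config.json references modules outside the allow-list."""
    with zipfile.ZipFile(path) as archive:
        config = json.loads(archive.read("config.json"))
    for module in _iter_modules(config):
        if module not in ALLOWED_MODULES and not module.startswith(ALLOWED_PREFIXES):
            raise ValueError(f"Disallowed module reference in model config: {module!r}")

# Usage (hypothetical path): run the pre-check, then load only if it passes.
# precheck_keras_archive("untrusted_model.keras")
# model = keras.models.load_model("untrusted_model.keras", safe_mode=True)
```

Such a pre-check is defense in depth, not a substitute for patching or for refusing untrusted models outright.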
Affected Countries
Germany, France, United Kingdom, Netherlands, Sweden, Finland, Belgium, Italy
Technical Details
- Data Version: 5.1
- Assigner Short Name:
- Date Reserved: 2025-02-21T11:13:03.951Z
- Cvss Version: 4.0
- State: PUBLISHED
Threat ID: 687fb240a83201eaac1d91b0
Added to database: 7/22/2025, 3:46:08 PM
Last enriched: 7/22/2025, 4:01:11 PM
Last updated: 7/23/2025, 12:39:44 AM
Related Threats
- CVE-2025-42947: CWE-94: Improper Control of Generation of Code in SAP_SE SAP FICA ODN framework (Medium)
- CVE-2025-7722: CWE-272 Least Privilege Violation in steverio Social Streams (High)
- CVE-2025-6261: CWE-79 Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') in fleetwire Fleetwire Fleet Management (Medium)
- CVE-2025-6215: CWE-862 Missing Authorization in omnishop Omnishop – Mobile shop apps complementing your WooCommerce webshop (Medium)
- CVE-2025-6214: CWE-352 Cross-Site Request Forgery (CSRF) in omnishop Omnishop – Mobile shop apps complementing your WooCommerce webshop (Medium)