
CVE-2026-1669: CWE-73 External Control of File Name or Path in Google Keras

Severity: High
Tags: Vulnerability, CVE-2026-1669, CWE-73, CWE-200
Published: Wed Feb 11 2026 (02/11/2026, 22:10:10 UTC)
Source: CVE Database V5
Vendor/Project: Google
Product: Keras

Description

CVE-2026-1669 is a high-severity vulnerability in Google Keras versions 3.0.0 through 3.13.1 that allows remote attackers to read arbitrary local files via crafted .keras model files that exploit HDF5 external dataset references. The flaw arises from improper external control of file names or paths during model loading: no authentication is required, but the victim must be induced to load the malicious model. The vulnerability primarily impacts confidentiality, with limited impact on integrity and availability. No exploits are currently known in the wild. European organizations using affected Keras versions in machine learning workflows are at risk, especially those handling sensitive data or intellectual property.
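The required user interaction is nothing more exotic than the standard load call. A minimal sketch of the victim-side trigger, assuming the public Keras 3 loading API (the file name is hypothetical):

```python
# Victim-side trigger: on affected versions (3.0.0 through 3.13.1), this
# ordinary load call is enough to make Keras dereference any HDF5 external
# dataset references embedded in the file. The file name is hypothetical.
import keras

model = keras.saving.load_model("model_from_untrusted_source.keras")
```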

AI-Powered Analysis

Last updated: 02/11/2026, 22:45:35 UTC

Technical Analysis

CVE-2026-1669 is a vulnerability identified in Google Keras, a widely used deep learning framework, specifically affecting versions 3.0.0 through 3.13.1. The issue stems from the model loading mechanism's integration with the HDF5 file format, which supports external dataset references. An attacker can craft a malicious .keras model file that includes HDF5 external dataset references pointing to arbitrary files on the victim's local file system. When such a model is loaded, Keras inadvertently reads and discloses the contents of these local files, leading to an arbitrary file read vulnerability. This vulnerability is categorized under CWE-73 (External Control of File Name or Path) and CWE-200 (Information Exposure). The attack requires no privileges or authentication but does require user interaction to load the malicious model file. The CVSS 4.0 base score is 7.1, reflecting a network attack vector with low attack complexity, no privileges required, and user interaction needed. The impact on confidentiality is high due to potential exposure of sensitive files, while integrity and availability impacts are low. No patches were listed at the time of reporting, and no known exploits have been observed in the wild. The vulnerability affects all supported platforms running the vulnerable Keras versions, making it broadly relevant to environments using Keras for machine learning model development or deployment.
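The underlying HDF5 feature is easy to see in isolation. Below is a minimal sketch of the external-dataset mechanism using h5py; all file names and contents are hypothetical stand-ins, and it demonstrates the storage feature itself rather than the exact exploit path inside Keras:

```python
# Demonstrates HDF5 external dataset storage: the dataset in the .h5
# container holds no data of its own; its raw bytes are declared to live
# in a separate local file, which HDF5 reads transparently on access.
import h5py

# Stand-in for a sensitive local file on the victim's machine.
SECRET = b"db_password=hunter2\n"
with open("secret.txt", "wb") as fh:
    fh.write(SECRET)

# Attacker side: craft a container whose dataset points at that path.
with h5py.File("malicious.h5", "w") as f:
    f.create_dataset("weights", shape=(len(SECRET),), dtype="uint8",
                     external=[("secret.txt", 0, len(SECRET))])

# Victim side: simply reading the dataset (as a model load would) returns
# the bytes of the referenced local file.
with h5py.File("malicious.h5", "r") as f:
    print(bytes(f["weights"][:]))  # b'db_password=hunter2\n'
```

In the Keras case, the same kind of reference is smuggled inside the weights payload of a crafted .keras file, so the file read happens during the normal model deserialization path.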

Potential Impact

For European organizations, the arbitrary file read vulnerability poses a significant risk to confidentiality, potentially exposing sensitive intellectual property, personal data, or credentials stored on systems running vulnerable Keras versions. Organizations involved in AI research, healthcare, finance, or critical infrastructure that utilize Keras for model training or inference could be targeted to leak proprietary models or sensitive datasets. The vulnerability could facilitate further attacks by revealing configuration files or secrets that enable lateral movement or privilege escalation. Although the vulnerability does not directly impact system integrity or availability, the loss of confidentiality can have severe regulatory and reputational consequences, especially under GDPR and other data protection laws. The requirement for user interaction means social engineering or supply chain attacks (e.g., sharing malicious model files) are likely attack vectors. The absence of known exploits in the wild suggests a window for proactive mitigation, but the widespread use of Keras in European AI ecosystems increases the potential attack surface.

Mitigation Recommendations

European organizations should take the following steps:

- Apply strict validation and provenance checks to all .keras model files before loading them, especially files obtained from untrusted or external sources (a sketch of such a pre-load scan follows this list).
- Employ sandboxing or containerization to isolate the model-loading process, limiting file system access so that unauthorized reads cannot reach sensitive files.
- Monitor and restrict the use of HDF5 external dataset references within models, or disable the feature entirely if it is not required.
- Maintain an up-to-date inventory of Keras versions in use and prioritize upgrading to patched versions once available from Google.
- Incorporate security training to raise awareness about the risks of loading untrusted machine learning models.
- Use endpoint detection and response (EDR) tools to monitor for suspicious file access patterns during model loading.
- Collaborate with software supply chain teams to ensure the integrity of AI model repositories and distribution channels.
- Engage with Google's security advisories for timely updates and patches.
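One concrete way to act on the validation advice in the first item is to scan a model archive for external dataset references before handing it to Keras. A minimal sketch, assuming the Keras v3 .keras container format (a zip archive with an embedded HDF5 weights file, typically model.weights.h5); the helper names are hypothetical, not a Keras API:

```python
# Pre-load scan: reject .keras archives whose embedded HDF5 payloads use
# external dataset storage. Helper names are illustrative, not a Keras API.
import io
import zipfile

import h5py

def has_external_storage(h5_bytes: bytes) -> bool:
    """True if any dataset in the HDF5 payload stores its data externally."""
    found = False
    with h5py.File(io.BytesIO(h5_bytes), "r") as f:
        def visit(name):
            nonlocal found
            obj = f.get(name)
            # Dataset.external is a list of (path, offset, size) tuples,
            # or None for ordinary internally stored datasets.
            if isinstance(obj, h5py.Dataset) and obj.external:
                found = True
        f.visit(visit)
    return found

def scan_keras_archive(path: str) -> None:
    """Raise before load if any embedded HDF5 member looks dangerous."""
    with zipfile.ZipFile(path) as zf:
        for member in zf.namelist():
            if member.endswith((".h5", ".hdf5")):
                if has_external_storage(zf.read(member)):
                    raise ValueError(
                        f"{path}:{member} contains HDF5 external dataset "
                        "references; refusing to load"
                    )

scan_keras_archive("untrusted_model.keras")  # run this before load_model()
```

A hardened variant might also reject HDF5 external and soft links, and run both the scan and the subsequent load inside a sandbox with no access to sensitive paths.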


Technical Details

Data Version: 5.2
Assigner Short Name: Google
Date Reserved: 2026-01-29T22:48:03.030Z
CVSS Version: 4.0
State: PUBLISHED

Threat ID: 698d03334b57a58fa1d6e844

Added to database: 2/11/2026, 10:31:15 PM

Last enriched: 2/11/2026, 10:45:35 PM

Last updated: 2/12/2026, 12:42:12 AM

