CVE-2026-1669: CWE-73 External Control of File Name or Path in Google Keras
CVE-2026-1669 is a high-severity vulnerability in Google Keras versions 3.0.0 through 3.13.1 that allows remote attackers to read arbitrary local files via crafted .keras model files exploiting HDF5 external dataset references. This flaw arises from improper external control of file names or paths during the model loading process, potentially exposing sensitive information without requiring authentication, though user interaction is needed to load the malicious model. The vulnerability impacts confidentiality significantly, with limited impact on integrity and availability. No known exploits are currently reported in the wild. European organizations using affected Keras versions in machine learning workflows are at risk, especially those handling sensitive data or intellectual property.
AI Analysis
Technical Summary
CVE-2026-1669 is a vulnerability identified in Google Keras, a widely used deep learning framework, specifically affecting versions 3.0.0 through 3.13.1. The issue stems from the model loading mechanism's integration with the HDF5 file format, which supports external dataset references. An attacker can craft a malicious .keras model file that includes HDF5 external dataset references pointing to arbitrary files on the victim's local file system. When such a model is loaded, Keras inadvertently reads and discloses the contents of these local files, leading to an arbitrary file read vulnerability. This vulnerability is categorized under CWE-73 (External Control of File Name or Path) and CWE-200 (Information Exposure). The attack requires no privileges or authentication but does require user interaction to load the malicious model file. The CVSS 4.0 base score is 7.1, reflecting a network attack vector with low attack complexity, no privileges required, and user interaction needed. The impact on confidentiality is high due to potential exposure of sensitive files, while integrity and availability impacts are low. No patches were listed at the time of reporting, and no known exploits have been observed in the wild. The vulnerability affects all supported platforms running the vulnerable Keras versions, making it broadly relevant to environments using Keras for machine learning model development or deployment.
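To make the mechanism concrete, below is a minimal sketch, using the public h5py API, of how HDF5 external storage redirects a dataset read to an arbitrary local path. The file names and the stand-in secret are hypothetical; this illustrates the underlying HDF5 feature the advisory describes, not the exact Keras loading code path or a working exploit.

import h5py

# Stand-in for a sensitive local file on the victim's machine (hypothetical content).
with open("secret.txt", "wb") as fh:
    fh.write(b"db_password=hunter2\n")

# Build an HDF5 container whose dataset bytes live in *external* storage:
# the container holds no data itself, only a (path, offset, size) pointer
# chosen by whoever authored the file.
with h5py.File("weights.h5", "w") as f:
    f.create_dataset(
        "kernel",
        shape=(20,),
        dtype="uint8",
        external=[("secret.txt", 0, 20)],  # attacker-controlled path in a crafted model
    )

# Any consumer that reads the dataset back receives the referenced local
# file's bytes instead of legitimate weight values.
with h5py.File("weights.h5", "r") as f:
    print(bytes(f["kernel"][:]))  # b'db_password=hunter2\n'

In a crafted .keras archive, the external reference would instead name a path of the attacker's choosing (for example a credentials or configuration file), and the disclosure would occur when the model's weights are deserialized.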
Potential Impact
For European organizations, the arbitrary file read vulnerability poses a significant risk to confidentiality, potentially exposing sensitive intellectual property, personal data, or credentials stored on systems running vulnerable Keras versions. Organizations involved in AI research, healthcare, finance, or critical infrastructure that utilize Keras for model training or inference could be targeted to leak proprietary models or sensitive datasets. The vulnerability could facilitate further attacks by revealing configuration files or secrets that enable lateral movement or privilege escalation. Although the vulnerability does not directly impact system integrity or availability, the loss of confidentiality can have severe regulatory and reputational consequences, especially under GDPR and other data protection laws. The requirement for user interaction means social engineering or supply chain attacks (e.g., sharing malicious model files) are likely attack vectors. The absence of known exploits in the wild suggests a window for proactive mitigation, but the widespread use of Keras in European AI ecosystems increases the potential attack surface.
Mitigation Recommendations
European organizations should implement strict validation and provenance checks on all .keras model files before loading them, especially those obtained from untrusted or external sources. Employ sandboxing or containerization techniques to isolate the model loading process, limiting file system access to prevent unauthorized reads. Monitor and restrict the use of HDF5 external dataset references within models or disable this feature if not required. Maintain up-to-date inventories of Keras versions in use and prioritize upgrading to patched versions once available from Google. Incorporate security training to raise awareness about the risks of loading untrusted machine learning models. Use endpoint detection and response (EDR) tools to monitor suspicious file access patterns during model loading. Collaborate with software supply chain teams to ensure integrity of AI model repositories and distribution channels. Finally, engage with Google’s security advisories for timely updates and patches.
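One concrete way to apply the validation and "restrict HDF5 external dataset references" advice above is to scan a model file for external storage before handing it to the loader. The sketch below is an assumption-laden example rather than an official Keras API: it assumes h5py is available, that the .keras archive is a zip containing HDF5 weight members, and the function names and rejection policy are illustrative.

import io
import zipfile
import h5py

def hdf5_uses_external_storage(source) -> bool:
    """True if any dataset in an HDF5 file is backed by external (on-disk) storage.
    `source` may be a filesystem path or a binary file-like object."""
    flagged = False
    with h5py.File(source, "r") as f:
        def visit(name, obj):
            nonlocal flagged
            # Dataset.external is a list of (path, offset, size) tuples, or None.
            if isinstance(obj, h5py.Dataset) and obj.external:
                flagged = True
        f.visititems(visit)
    return flagged

def model_file_is_suspicious(model_path: str) -> bool:
    """Flag .keras archives (zip files) or plain HDF5 models whose datasets
    reference external files."""
    if zipfile.is_zipfile(model_path):
        with zipfile.ZipFile(model_path) as zf:
            for member in zf.namelist():
                if member.endswith((".h5", ".hdf5")):
                    if hdf5_uses_external_storage(io.BytesIO(zf.read(member))):
                        return True
        return False
    # Legacy single-file HDF5 models (.h5) can be checked directly.
    return hdf5_uses_external_storage(model_path)

# Usage (hypothetical file name): gate the check before deserialization.
# if model_file_is_suspicious("untrusted_model.keras"):
#     raise ValueError("HDF5 external dataset references detected; refusing to load.")

Running the loader itself inside a container or seccomp/AppArmor profile with a read-only, minimal filesystem view provides defence in depth if a malicious reference slips past this kind of static check.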
Affected Countries
Germany, France, United Kingdom, Netherlands, Sweden, Finland, Italy
Technical Details
- Data Version: 5.2
- Assigner Short Name:
- Date Reserved: 2026-01-29T22:48:03.030Z
- CVSS Version: 4.0
- State: PUBLISHED
Threat ID: 698d03334b57a58fa1d6e844
Added to database: 2/11/2026, 10:31:15 PM
Last enriched: 2/11/2026, 10:45:35 PM
Last updated: 2/12/2026, 12:42:12 AM
Related Threats
- CVE-2026-20700 (Critical): An attacker with memory write capability may be able to execute arbitrary code. Apple is aware of a report that this issue may have been exploited in an extremely sophisticated attack against specific targeted individuals on versions of iOS before iOS 26. CVE-2025-14174 and CVE-2025-43529 were also issued in response to this report. (Apple macOS)
- CVE-2026-20682 (High): An attacker may be able to discover a user’s deleted notes in Apple iOS and iPadOS
- CVE-2026-20681 (Medium): An app may be able to access information about a user's contacts in Apple macOS
- CVE-2026-20680 (High): A sandboxed app may be able to access sensitive user data in Apple macOS
- CVE-2026-20678 (High): An app may be able to access sensitive user data in Apple iOS and iPadOS