CVE-2026-0897: CWE-770 Allocation of Resources Without Limits or Throttling in Google Keras
Allocation of Resources Without Limits or Throttling in the HDF5 weight loading component in Google Keras 3.0.0 through 3.13.0 on all platforms allows a remote attacker to cause a Denial of Service (DoS) through memory exhaustion and a crash of the Python interpreter via a crafted .keras archive containing a valid model.weights.h5 file whose dataset declares an extremely large shape.
AI Analysis
Technical Summary
CVE-2026-0897 is a resource exhaustion vulnerability classified under CWE-770, affecting Google Keras versions 3.0.0 through 3.13.0 across all platforms. The vulnerability arises in the HDF5 weight loading component responsible for reading model weights from .keras archive files. Specifically, when loading a crafted model.weights.h5 file containing a dataset that declares an extremely large shape, Keras allocates memory without imposing limits or throttling. This unchecked allocation can exhaust system memory, causing the Python interpreter to crash and resulting in a denial of service (DoS). The attack vector requires only that the vulnerable system load the malicious .keras file, which can be delivered remotely without authentication or user interaction. The CVSS 4.0 base score is 7.1, reflecting high severity due to network attack vector, low attack complexity, no privileges or user interaction required, and high impact on availability. No patches or exploit mitigations are currently linked, and no known exploits have been reported in the wild. This vulnerability poses a risk to any environment where Keras is used to load external or untrusted model files, including automated ML pipelines, model sharing platforms, and AI services.
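Because the flaw is triggered by declared dataset shapes rather than by actual payload bytes, archive metadata can be inspected cheaply before any allocation occurs. The following Python sketch illustrates such a pre-check under stated assumptions: it relies on the Keras 3 archive layout (a zip file containing model.weights.h5, as named in the description above), and the helper name check_keras_archive and the 1 GiB per-dataset budget are hypothetical choices, not part of the Keras API.

```python
import math
import tempfile
import zipfile

import h5py  # reads HDF5 metadata without materializing dataset contents

# Hypothetical per-dataset budget; tune to the largest weights you expect.
MAX_DATASET_BYTES = 1 << 30  # 1 GiB

def check_keras_archive(path, max_bytes=MAX_DATASET_BYTES):
    """Reject a .keras archive whose weights file declares an oversized
    dataset, before Keras ever attempts the allocation."""
    with zipfile.ZipFile(path) as archive, tempfile.TemporaryDirectory() as tmp:
        weights = archive.extract("model.weights.h5", tmp)
        with h5py.File(weights, "r") as f:
            def visit(name, obj):
                if isinstance(obj, h5py.Dataset):
                    # Declared size = product of dims * element size; Python
                    # ints do not overflow, so absurd shapes are caught too.
                    declared = math.prod(obj.shape) * obj.dtype.itemsize
                    if declared > max_bytes:
                        raise ValueError(
                            f"dataset {name!r} declares {declared:,} bytes "
                            f"(budget {max_bytes:,})"
                        )
            f.visititems(visit)
```

Running check_keras_archive("untrusted.keras") ahead of the normal load would flag a hostile file with a ValueError instead of letting the interpreter attempt the allocation; opening the HDF5 file read-only touches only metadata, so the check itself remains cheap.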
Potential Impact
For European organizations, the primary impact is denial of service caused by memory exhaustion when processing malicious Keras model files. This can disrupt AI/ML workflows, degrade service availability, and potentially cause downtime in critical systems relying on machine learning models. Organizations in sectors such as finance, healthcare, automotive, and research that use Keras for AI development or deployment are at risk of operational interruptions. The vulnerability could be exploited to target AI infrastructure, causing service outages or forcing costly recovery efforts. Since the attack requires no authentication or user interaction, it could be triggered by automated processes that load untrusted models, increasing the risk of widespread impact. Additionally, organizations sharing or receiving models from external sources without validation may inadvertently introduce this threat. Although no data confidentiality or integrity loss is indicated, the availability impact alone can have significant business consequences.
Mitigation Recommendations
European organizations should implement strict validation and sanitization of all incoming .keras model files before loading them in Keras environments. Limiting or sandboxing the execution environment for model loading can prevent system-wide memory exhaustion. Employ resource monitoring and set memory usage limits for Python processes handling model files to detect and mitigate abnormal consumption. Where possible, upgrade Keras to versions beyond 3.13.0 once patches become available. Until patches are released, avoid loading untrusted or external model files directly. Employ network segmentation and access controls to restrict exposure of AI/ML infrastructure to untrusted sources. Incorporate anomaly detection to identify unusual crashes or memory spikes related to model loading. Finally, maintain incident response plans to quickly recover from potential denial of service events caused by this vulnerability.
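The per-process memory cap recommended above can be approximated on POSIX systems with the standard-library resource module. This is a minimal sketch, assuming the Keras 3 keras.saving.load_model entry point and an illustrative 4 GiB cap; both values are placeholders to adapt:

```python
import resource
import sys

# Illustrative cap: restrict this process's address space to ~4 GiB
# (POSIX only), so a crafted model fails inside this process instead of
# exhausting the host.
LIMIT_BYTES = 4 * (1 << 30)
resource.setrlimit(resource.RLIMIT_AS, (LIMIT_BYTES, LIMIT_BYTES))

import keras  # imported after the limit is in place

try:
    model = keras.saving.load_model("untrusted.keras")
except MemoryError:
    # The declared shapes exceeded the budget; treat the file as hostile.
    sys.exit("model load exceeded the memory budget; rejecting file")
```

Because the oversized allocation may occur inside native HDF5 or backend code rather than raising a clean Python MemoryError, running the load in a disposable worker process (for example via multiprocessing) is the safer pattern: the parent service survives even if the worker is killed.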
Affected Countries
Germany, France, United Kingdom, Netherlands, Sweden, Finland, Denmark, Ireland, Belgium, Italy
Technical Details
- Data Version: 5.2
- Assigner Short Name:
- Date Reserved: 2026-01-13T15:59:54.703Z
- CVSS Version: 4.0
- State: PUBLISHED
Threat ID: 6968f7254c611209ad1c4a6a
Added to database: 1/15/2026, 2:18:13 PM
Last enriched: 1/15/2026, 2:32:44 PM
Last updated: 1/15/2026, 5:35:56 PM
Related Threats
- CVE-2025-70305: n/a (severity: Unknown)
- CVE-2026-20076: Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') in Cisco Identity Services Engine Software (severity: Medium)
- CVE-2026-20075: Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') in Cisco Evolved Programmable Network Manager (EPNM) (severity: Medium)
- CVE-2026-20047: Improper Neutralization of Script-Related HTML Tags in a Web Page (Basic XSS) in Cisco Identity Services Engine Software (severity: Medium)
- CVE-2025-70656: n/a (severity: High)