CVE-2022-35968: CWE-617: Reachable Assertion in tensorflow tensorflow

Medium
Published: Fri Sep 16 2022 (09/16/2022, 20:40:10 UTC)
Source: CVE
Vendor/Project: tensorflow
Product: tensorflow

Description

TensorFlow is an open source platform for machine learning. The implementation of `AvgPoolGrad` does not fully validate the input `orig_input_shape`. This results in a `CHECK` failure which can be used to trigger a denial of service attack. We have patched the issue in GitHub commit 3a6ac52664c6c095aa2b114e742b0aa17fdce78f. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported range. There are no known workarounds for this issue.

AI-Powered Analysis

Last updated: 06/22/2025, 20:06:08 UTC

Technical Analysis

CVE-2022-35968 is a vulnerability identified in TensorFlow, an open-source machine learning platform widely used for developing and deploying machine learning models. The issue arises from the implementation of the `AvgPoolGrad` function, which is responsible for computing gradients during the backpropagation phase of average pooling layers in neural networks. Specifically, the vulnerability is due to insufficient validation of the input parameter `orig_input_shape`. This lack of proper validation can cause the program to hit a `CHECK` failure — an assertion designed to verify internal assumptions — leading to an abrupt termination of the TensorFlow process. This behavior effectively results in a denial of service (DoS) condition, where legitimate machine learning workloads or services relying on TensorFlow can be unexpectedly interrupted or stopped.

The vulnerability affects multiple TensorFlow versions: all versions prior to 2.7.2, versions from 2.8.0 up to but not including 2.8.1, and versions from 2.9.0 up to but not including 2.9.1. The issue has been patched in TensorFlow 2.10.0 and backported to supported versions 2.7.2, 2.8.1, and 2.9.1. No known workarounds exist, meaning that upgrading to a patched version is the primary remediation. There are no reports of active exploitation in the wild, indicating that while the vulnerability is exploitable, it has not yet been weaponized or widely abused.

The vulnerability is classified under CWE-617 (Reachable Assertion), which typically involves assertions in code that can be triggered by external input, leading to program termination or denial of service. Exploitation requires feeding malformed input shapes to the vulnerable function, which may require some level of access to the machine learning pipeline or model training environment. User interaction is not necessarily required beyond the ability to submit or influence input data to TensorFlow. Overall, this vulnerability poses a risk primarily to environments where TensorFlow is used in production or critical workflows, especially those that process untrusted or external data inputs.
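The failure mode is easiest to see in miniature: `AvgPoolGrad` receives `orig_input_shape` as a plain integer vector, and a `CHECK` aborts the whole process when that vector is inconsistent, rather than returning an error to the caller. The sketch below is a hypothetical pure-Python mirror of the kind of pre-validation the fix adds — the function name and the exact checks are assumptions for illustration, not the actual C++ kernel code in commit 3a6ac52:

```python
def validate_orig_input_shape(orig_input_shape, ksize, strides):
    """Reject shape vectors that would otherwise trip an internal CHECK.

    Hypothetical mirror of the validation the patch performs; the real
    kernel validates in C++ before computing output dimensions, and
    returns an InvalidArgument error instead of aborting the process.
    """
    # AvgPoolGrad expects a rank-4 (e.g. NHWC) input shape.
    if len(orig_input_shape) != 4:
        raise ValueError("orig_input_shape must have rank 4")
    # Zero or negative dimensions would make the gradient computation
    # hit an assertion deeper in the kernel.
    if any(d <= 0 for d in orig_input_shape):
        raise ValueError("all dimensions of orig_input_shape must be positive")
    # Pooling window and stride vectors must match the input rank.
    if len(ksize) != 4 or len(strides) != 4:
        raise ValueError("ksize and strides must each have 4 elements")
    return True
```

The key design point is that a validation error raised (or returned) to the caller is recoverable, whereas a `CHECK` failure terminates the serving process and takes every co-located workload down with it — which is exactly what makes this a denial-of-service primitive.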

Potential Impact

For European organizations, the impact of CVE-2022-35968 centers on availability disruption of machine learning services and workflows. Organizations relying on TensorFlow for critical AI applications—such as financial institutions using ML for fraud detection, healthcare providers employing AI for diagnostics, or manufacturing firms leveraging predictive maintenance—may experience service interruptions if the vulnerability is triggered. The denial of service could lead to downtime, delayed processing, or failure of automated decision-making systems, potentially causing operational inefficiencies or financial loss. Since TensorFlow is widely adopted across industries in Europe, the scope of impact can be broad, particularly in sectors with high AI integration. Additionally, organizations that expose TensorFlow-based services to external users or partners may face increased risk if attackers can supply crafted inputs to trigger the assertion failure. Although the vulnerability does not directly compromise confidentiality or integrity, the availability impact can indirectly affect business continuity and trust in AI systems. The absence of known exploits reduces immediate risk, but the presence of a patch and the medium severity rating indicate that timely remediation is important to prevent future exploitation. Given the lack of workarounds, organizations must prioritize patching to maintain service reliability.

Mitigation Recommendations

1. Immediate Upgrade: Organizations should upgrade TensorFlow installations to version 2.10.0 or later, or to the patched versions 2.7.2, 2.8.1, or 2.9.1 if they are using those branches. This is the only effective mitigation since no workarounds exist.
2. Input Validation Controls: Implement additional validation at the application or orchestration layer to ensure that input shapes or parameters passed to TensorFlow are sanitized and conform to expected ranges and formats. This can reduce the risk of triggering the assertion failure from malformed inputs.
3. Isolation and Access Controls: Restrict access to TensorFlow model training and inference environments to trusted users and systems only. Limit exposure of TensorFlow services to untrusted networks or users to reduce the attack surface.
4. Monitoring and Alerting: Deploy monitoring to detect unexpected TensorFlow process crashes or service interruptions that could indicate exploitation attempts. Logging input parameters and failures can help identify anomalous patterns.
5. Testing and Validation: Incorporate fuzz testing or input validation tests targeting the `AvgPoolGrad` function or similar components to proactively detect assertion failures during development and staging before deployment.
6. Incident Response Preparedness: Prepare response plans for potential denial of service incidents affecting AI services, including fallback mechanisms or redundancy to maintain availability during remediation.
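For the upgrade recommendation, fleet inventories can be checked mechanically against the affected ranges stated in the advisory (fixed in 2.10.0, backported to 2.7.2, 2.8.1, and 2.9.1). A minimal sketch, assuming a plain `major.minor.patch` version string; the helper name is hypothetical:

```python
# Minimum patched release per affected minor branch, per the advisory.
PATCHED_FLOOR = {
    (2, 7): (2, 7, 2),
    (2, 8): (2, 8, 1),
    (2, 9): (2, 9, 1),
}

def is_vulnerable(version: str) -> bool:
    """Return True if this TensorFlow version is affected by CVE-2022-35968.

    Hypothetical helper for fleet auditing; assumes a well-formed
    "major.minor.patch" version string.
    """
    major, minor, patch = (int(p) for p in version.split(".")[:3])
    if (major, minor) >= (2, 10):
        return False  # fix shipped in 2.10.0 and later
    floor = PATCHED_FLOOR.get((major, minor))
    if floor is None:
        return True  # branches before 2.7 never received the backport
    return (major, minor, patch) < floor
```

Any installation this helper flags should be scheduled for upgrade, since the advisory states there are no workarounds.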


Technical Details

Data Version
5.1
Assigner Short Name
GitHub_M
Date Reserved
2022-07-15T00:00:00.000Z
Cisa Enriched
true

Threat ID: 682d9845c4522896dcbf4063

Added to database: 5/21/2025, 9:09:25 AM

Last enriched: 6/22/2025, 8:06:08 PM

Last updated: 8/15/2025, 3:57:44 AM


