CVE-2022-35992: CWE-617: Reachable Assertion in tensorflow tensorflow

Medium
Published: Fri Sep 16 2022 (09/16/2022, 22:20:21 UTC)
Source: CVE
Vendor/Project: tensorflow
Product: tensorflow

Description

TensorFlow is an open source platform for machine learning. When `TensorListFromTensor` receives an `element_shape` of a rank greater than one, it gives a `CHECK` fail that can trigger a denial of service attack. We have patched the issue in GitHub commit 3db59a042a38f4338aa207922fa2f476e000a6ee. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported range. There are no known workarounds for this issue.

AI-Powered Analysis

Last updated: 06/22/2025, 18:20:55 UTC

Technical Analysis

CVE-2022-35992 is a vulnerability in TensorFlow, an open-source machine learning platform widely used for developing and deploying machine learning models. The issue arises in the function `TensorListFromTensor` when it receives an `element_shape` parameter with a rank greater than one. This triggers a `CHECK` failure, an assertion mechanism TensorFlow uses internally to validate assumptions during execution. When the assertion fails, the process terminates abruptly, resulting in a denial of service (DoS) condition. The vulnerability is categorized under CWE-617 (Reachable Assertion), meaning an attacker can deliberately cause the assertion failure by providing crafted input.

Affected versions are TensorFlow releases prior to 2.7.2, the 2.8.x series prior to 2.8.1, and the 2.9.x series prior to 2.9.1. The issue has been patched in TensorFlow 2.10.0 and backported to the supported 2.7.2, 2.8.1, and 2.9.1 releases. No known workarounds exist, so users must update to a patched version to remediate the vulnerability.

Exploitation requires neither authentication nor user interaction, but it does require the ability to supply malicious input to the TensorFlow process, which is typically possible in environments where TensorFlow processes untrusted data or models. There are no known exploits in the wild at the time of publication, but the vulnerability could be leveraged to disrupt machine learning services by causing crashes or downtime.
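The fix referenced above (commit 3db59a042a38f4338aa207922fa2f476e000a6ee) follows the standard pattern for CWE-617 remediation: replacing a fatal `CHECK` with a recoverable error returned to the caller. A minimal plain-Python sketch of that validation pattern, where a nested list stands in for a tensor and nesting depth stands in for rank (the function name and error type are illustrative, not TensorFlow's actual API):

```python
def validate_element_shape(element_shape):
    """Reject an element_shape whose rank exceeds one.

    `element_shape` is modeled as a (possibly nested) Python list standing
    in for a tensor; its nesting depth plays the role of tensor rank.
    Raising a catchable exception instead of aborting the process mirrors
    the CHECK-to-recoverable-error pattern of the patch.
    """
    def rank(x):
        # Nesting depth of a list approximates tensor rank in this sketch.
        if isinstance(x, list) and x:
            return 1 + rank(x[0])
        return 1 if isinstance(x, list) else 0

    r = rank(element_shape)
    if r > 1:
        # Recoverable error instead of a CHECK-fail that kills the process.
        raise ValueError(f"element_shape must have rank <= 1, got rank {r}")
    return element_shape
```

For example, a scalar or flat list such as `[2, 3]` passes, while a rank-2 input such as `[[1, 2], [3, 4]]`, analogous to the malformed `element_shape` that triggers the crash, raises a `ValueError` the caller can handle.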

Potential Impact

The primary impact of this vulnerability is denial of service, which affects the availability of machine learning services relying on vulnerable TensorFlow versions. For European organizations, especially those in sectors heavily dependent on AI and machine learning such as finance, healthcare, automotive, and telecommunications, this could lead to service interruptions, degraded operational efficiency, and potential financial losses. Organizations deploying TensorFlow in production environments or offering AI-as-a-service platforms could experience outages if attackers exploit this vulnerability by submitting malicious inputs. Although the vulnerability does not directly compromise confidentiality or integrity, the disruption of AI workflows could indirectly affect business continuity and trust in AI-driven applications. Additionally, organizations involved in critical infrastructure or research may face operational setbacks. Given the lack of known exploits, the immediate risk is moderate, but the widespread use of TensorFlow in Europe and the absence of workarounds elevate the importance of timely patching.

Mitigation Recommendations

To mitigate this vulnerability, European organizations should prioritize upgrading TensorFlow installations to version 2.10.0 or later, or apply the backported patches available in versions 2.7.2, 2.8.1, and 2.9.1. Since no workarounds exist, patching is the only effective remediation. Organizations should audit their environments to identify all instances of TensorFlow, including embedded systems, cloud services, and containerized deployments. Implement strict input validation and sanitization where possible to reduce the risk of malformed inputs reaching TensorFlow processes. Employ runtime monitoring and anomaly detection to identify unusual crashes or assertion failures that may indicate exploitation attempts. For environments where immediate patching is not feasible, consider isolating TensorFlow services behind strict access controls and network segmentation to limit exposure to untrusted inputs. Additionally, integrate vulnerability management processes to track TensorFlow updates and apply patches promptly. Finally, conduct security awareness training for developers and data scientists to recognize the importance of using supported TensorFlow versions and secure coding practices.
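As part of the audit step above, a quick way to flag exposed environments is to compare installed TensorFlow version strings against the patched minimums listed in the advisory (2.7.2, 2.8.1, 2.9.1, 2.10.0). A hedged sketch in plain Python; the helper name is illustrative, and pre-release suffixes such as `-rc0` would need extra handling:

```python
# Patched releases per the advisory: 2.7.2, 2.8.1, 2.9.1, and 2.10.0+.
PATCHED_MINIMUMS = {(2, 7): (2, 7, 2), (2, 8): (2, 8, 1), (2, 9): (2, 9, 1)}

def is_vulnerable(version: str) -> bool:
    """Return True if this TensorFlow version is affected by CVE-2022-35992."""
    parts = tuple(int(p) for p in version.split(".")[:3])
    if parts >= (2, 10, 0):
        return False  # fixed in 2.10.0 and later
    fix = PATCHED_MINIMUMS.get(parts[:2])
    if fix is not None:
        # Within a backported series: vulnerable only below the fix release.
        return parts < fix
    return True  # older series (prior to 2.7.2) never received the backport
```

For example, `is_vulnerable("2.9.0")` returns `True` while `is_vulnerable("2.9.1")` returns `False`, matching the affected ranges described in the technical analysis.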

Technical Details

Data Version
5.1
Assigner Short Name
GitHub_M
Date Reserved
2022-07-15T00:00:00.000Z
Cisa Enriched
true

Threat ID: 682d9845c4522896dcbf42dc

Added to database: 5/21/2025, 9:09:25 AM

Last enriched: 6/22/2025, 6:20:55 PM

Last updated: 8/11/2025, 7:15:09 AM
