CVE-2022-35988: CWE-617: Reachable Assertion in tensorflow tensorflow

Medium
Published: Fri Sep 16 2022 (09/16/2022, 21:35:10 UTC)
Source: CVE
Vendor/Project: tensorflow
Product: tensorflow

Description

TensorFlow is an open source platform for machine learning. When `tf.linalg.matrix_rank` receives an empty input `a`, the GPU kernel gives a `CHECK` fail that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit c55b476aa0e0bd4ee99d0f3ad18d9d706cd1260a. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported range. There are no known workarounds for this issue.

AI-Powered Analysis

Last updated: 06/22/2025, 19:36:46 UTC

Technical Analysis

CVE-2022-35988 is a vulnerability identified in TensorFlow, an open-source machine learning platform widely used for developing and deploying machine learning models. The issue arises specifically in the function tf.linalg.matrix_rank when it processes an empty input matrix 'a'. Under these conditions, the GPU kernel triggers a CHECK failure, which is an assertion that halts execution, leading to a denial of service (DoS) condition. This reachable assertion (CWE-617) means that an attacker can deliberately supply an empty matrix to cause the TensorFlow process to crash or become unavailable. The vulnerability affects multiple TensorFlow versions: all versions prior to 2.7.2, versions from 2.8.0 up to but not including 2.8.1, and versions from 2.9.0 up to but not including 2.9.1. The issue has been patched in TensorFlow 2.10.0 and backported to supported versions 2.7.2, 2.8.1, and 2.9.1. There are no known workarounds, meaning that users must update to a fixed version to mitigate the risk. While no exploits have been observed in the wild, the vulnerability is straightforward to trigger by providing crafted input, making it a credible threat to availability in environments running vulnerable TensorFlow versions, especially those utilizing GPU acceleration.
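For illustration, a minimal sketch of the trigger condition, assuming an affected TensorFlow release (for example 2.9.0) built with GPU support; the exact reproducer in the upstream advisory may differ slightly:

    # Illustrative sketch only: on an affected TensorFlow build with GPU support,
    # an empty input `a` reaches a CHECK assertion in the GPU kernel and aborts
    # the whole process (CWE-617), i.e. a denial of service.
    import tensorflow as tf

    empty = tf.constant([], dtype=tf.float32)  # empty input `a`
    tf.linalg.matrix_rank(a=empty)             # CHECK failure -> process abort on vulnerable builds

On patched releases (2.7.2, 2.8.1, 2.9.1, and 2.10.0 onward) the same call is expected to raise an ordinary Python exception rather than aborting the process.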

Potential Impact

For European organizations, the primary impact of this vulnerability is a denial of service condition affecting machine learning workloads that utilize TensorFlow with GPU support. Organizations relying on TensorFlow for critical AI/ML applications—such as financial institutions performing fraud detection, healthcare providers analyzing medical data, or manufacturing firms using predictive maintenance—may experience service interruptions or system crashes if exposed to malicious or malformed inputs. This could lead to operational downtime, loss of productivity, and potential cascading effects on dependent systems. Since the vulnerability does not lead to code execution or data leakage, confidentiality and integrity impacts are minimal. However, availability degradation in AI-driven services can have significant business consequences, especially in sectors where real-time data processing is essential. Given TensorFlow's widespread adoption in research institutions and enterprises across Europe, the risk is non-trivial. Additionally, the lack of workarounds means that organizations must prioritize patching to maintain service continuity.

Mitigation Recommendations

European organizations should take the following specific steps to mitigate this vulnerability:
1) Inventory all TensorFlow deployments, including development, testing, and production environments, to identify affected versions.
2) Prioritize upgrading TensorFlow installations to version 2.10.0 or later, or apply the backported patches available for versions 2.7.2, 2.8.1, and 2.9.1.
3) For environments where immediate patching is not feasible, implement input validation at the application layer to prevent empty matrices from being passed to tf.linalg.matrix_rank, effectively blocking the trigger condition (see the sketch after this list).
4) Monitor logs and application behavior for unexpected crashes or GPU kernel failures indicative of exploitation attempts.
5) In containerized or cloud environments, update container images and machine learning pipelines to incorporate patched TensorFlow versions.
6) Educate data scientists and developers about the vulnerability to avoid inadvertently triggering the assertion during model development or testing.
7) Establish incident response procedures to quickly recover from potential DoS incidents caused by this vulnerability.
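For step 3, a minimal input-validation sketch, assuming eager execution and that the application controls every call site; the wrapper name safe_matrix_rank is hypothetical and not part of TensorFlow:

    # Hypothetical application-layer guard (not a TensorFlow API): reject empty or
    # malformed inputs before they can reach the vulnerable GPU kernel.
    import tensorflow as tf

    def safe_matrix_rank(a, tol=None):
        a = tf.convert_to_tensor(a)
        # Eager-mode checks: anything empty or below rank 2 is rejected up front
        # instead of being handed to tf.linalg.matrix_rank.
        if a.shape.rank is None or a.shape.rank < 2 or tf.size(a) == 0:
            raise ValueError("matrix_rank requires a non-empty matrix of rank >= 2")
        return tf.linalg.matrix_rank(a, tol=tol)

Such a guard only narrows the attack surface for inputs that pass through it; upgrading to a patched release remains the actual fix.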

Technical Details

Data Version
5.1
Assigner Short Name
GitHub_M
Date Reserved
2022-07-15T00:00:00.000Z
Cisa Enriched
true

Threat ID: 682d9845c4522896dcbf40f6

Added to database: 5/21/2025, 9:09:25 AM

Last enriched: 6/22/2025, 7:36:46 PM

Last updated: 8/18/2025, 11:28:02 PM
