
CVE-2022-35994: CWE-617: Reachable Assertion in tensorflow tensorflow

Medium
Published: Fri Sep 16 2022 (09/16/2022, 22:20:31 UTC)
Source: CVE
Vendor/Project: tensorflow
Product: tensorflow

Description

TensorFlow is an open source platform for machine learning. When `CollectiveGather` receives a scalar `input`, it triggers a `CHECK` failure that can be used to mount a denial of service attack. We have patched the issue in GitHub commit c1f491817dec39a26be3c574e86a88c30f3c4770. The fix will be included in TensorFlow 2.10.0. We will also cherry-pick this commit onto TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported range. There are no known workarounds for this issue.

AI-Powered Analysis

Last updated: 06/22/2025, 18:20:25 UTC

Technical Analysis

CVE-2022-35994 is a medium-severity vulnerability in TensorFlow, an open-source machine learning platform widely used for developing and deploying ML models. The issue arises in the `CollectiveGather` operation, which aggregates tensors across multiple devices or nodes. When `CollectiveGather` receives a scalar (rank-0) input instead of a tensor of the expected rank, it triggers a reachable assertion failure (CWE-617). The failed `CHECK` aborts the process, leading to a denial of service (DoS) condition.

The vulnerability affects multiple TensorFlow versions: all versions prior to 2.7.2, versions from 2.8.0 up to but not including 2.8.1, and versions from 2.9.0 up to but not including 2.9.1. The issue was patched in commit c1f491817dec39a26be3c574e86a88c30f3c4770; the fix is included in TensorFlow 2.10.0, with backports planned for 2.7.2, 2.8.1, and 2.9.1. No known workarounds exist, and no exploits have been observed in the wild to date.

The vulnerability requires neither authentication nor user interaction to be triggered, but an attacker must be able to supply crafted inputs to the vulnerable `CollectiveGather` operation. Since TensorFlow is often embedded in larger applications or services, exploitation requires access to the ML pipeline or model-serving infrastructure. The impact is limited to denial of service via process crash, with no direct evidence of confidentiality or integrity compromise. However, disruption of ML services can have significant operational consequences in environments relying on TensorFlow for critical workloads.
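The failure mode above can be sketched with a pure-Python stand-in for the rank validation that the fix effectively enforces; the function and error message here are illustrative assumptions, not TensorFlow's actual internal API:

```python
def check_collective_gather_input(shape):
    """Illustrative stand-in for the rank check the patch adds.

    On affected TensorFlow versions, a rank-0 (scalar) `input` reached an
    internal CHECK and aborted the entire process; rejecting it with a
    recoverable error is the patched behavior.  `shape` is the input
    tensor's shape as a tuple of dimension sizes.
    """
    if len(shape) == 0:  # rank 0 means a scalar
        raise ValueError(
            "CollectiveGather requires input of rank >= 1, got a scalar")
    return shape

# A rank-1 input passes; a scalar raises instead of crashing the process.
check_collective_gather_input((3,))
try:
    check_collective_gather_input(())  # scalar: rejected gracefully
except ValueError as e:
    print("rejected:", e)
```

The essential difference is that a Python exception can be caught by the caller, whereas a failed `CHECK` in TensorFlow's C++ core terminates the whole serving process.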

Potential Impact

For European organizations, the primary impact of this vulnerability is service disruption. Organizations using TensorFlow for machine learning model training or inference in production environments may experience unexpected crashes or downtime if an attacker or faulty input triggers the assertion failure. This can affect sectors such as finance, healthcare, automotive, and manufacturing, where ML models are increasingly integrated into critical decision-making systems.

The denial of service could lead to loss of availability of AI-powered services, delayed processing, and potential financial or reputational damage. Since TensorFlow is also used in research and development, the vulnerability could interrupt ongoing experiments or data processing tasks. Although no data breach or code execution is indicated, the operational impact on availability can be significant, especially in environments with high automation or real-time ML inference requirements. The lack of known exploits reduces immediate risk, but the widespread use of TensorFlow in European enterprises and public sector organizations means that unpatched systems remain vulnerable to accidental or malicious triggering of this DoS condition.

Mitigation Recommendations

European organizations should prioritize upgrading TensorFlow to version 2.10.0 or later, or apply the backported patches for versions 2.7.2, 2.8.1, and 2.9.1 as soon as possible. Since no workarounds exist, patching is the primary mitigation. Additionally, organizations should implement input validation and sanitization in their ML pipelines to ensure that scalar inputs are not passed to `CollectiveGather` operations inadvertently or maliciously. Monitoring and logging of TensorFlow service crashes can help detect attempts to exploit this vulnerability. Deploying TensorFlow within containerized or isolated environments can limit the blast radius of a crash. For critical production systems, consider implementing redundancy and failover mechanisms to maintain availability if a TensorFlow process crashes. Finally, restrict access to model serving endpoints and ML infrastructure to trusted users and networks to reduce the risk of malicious input injection.
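As a triage aid, the affected-version ranges from the advisory can be expressed as a small helper. This is a sketch: it assumes plain `MAJOR.MINOR.PATCH` version strings, and pre-release suffixes such as `-rc1` would need extra handling:

```python
def is_vulnerable_to_cve_2022_35994(version: str) -> bool:
    """Return True if this TensorFlow version needs the patch.

    Affected per the advisory: < 2.7.2, 2.8.0 (i.e. < 2.8.1),
    and 2.9.0 (i.e. < 2.9.1).  Fixed in 2.7.2, 2.8.1, 2.9.1, and 2.10.0+.
    """
    t = tuple(int(x) for x in version.split(".")[:3])
    if t >= (2, 10, 0):
        return False
    if t[:2] == (2, 9):
        return t < (2, 9, 1)
    if t[:2] == (2, 8):
        return t < (2, 8, 1)
    return t < (2, 7, 2)

# Example: flag deployments still running vulnerable builds.
for v in ("2.6.0", "2.7.2", "2.8.0", "2.9.1", "2.10.0"):
    status = "vulnerable" if is_vulnerable_to_cve_2022_35994(v) else "patched"
    print(v, status)
```

In practice the installed version can be read from `tensorflow.__version__` and checked during deployment or inventory scans.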


Technical Details

Data Version
5.1
Assigner Short Name
GitHub_M
Date Reserved
2022-07-15T00:00:00.000Z
Cisa Enriched
true

Threat ID: 682d9845c4522896dcbf42e4

Added to database: 5/21/2025, 9:09:25 AM

Last enriched: 6/22/2025, 6:20:25 PM

Last updated: 8/17/2025, 7:18:41 PM
