CVE-2022-35990: CWE-617: Reachable Assertion in tensorflow tensorflow

Medium
Published: Fri Sep 16 2022 (09/16/2022, 22:00:12 UTC)
Source: CVE
Vendor/Project: tensorflow
Product: tensorflow

Description

TensorFlow is an open source platform for machine learning. When `tf.quantization.fake_quant_with_min_max_vars_per_channel_gradient` receives input `min` or `max` of rank other than 1, it gives a `CHECK` fail that can trigger a denial of service attack. We have patched the issue in GitHub commit f3cf67ac5705f4f04721d15e485e192bb319feed. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported range. There are no known workarounds for this issue.

AI-Powered Analysis

Last updated: 06/22/2025, 18:21:26 UTC

Technical Analysis

CVE-2022-35990 is a vulnerability in the TensorFlow machine learning platform, specifically in the function `tf.quantization.fake_quant_with_min_max_vars_per_channel_gradient`, which computes gradients for fake quantization using per-channel `min` and `max` variables. The vulnerability arises when the `min` or `max` input has a rank other than 1. Under these conditions the function triggers a `CHECK` failure, an internal assertion in the TensorFlow codebase, which terminates the process unexpectedly and results in a denial of service (DoS). The root cause is classified as CWE-617 (Reachable Assertion), meaning crafted inputs can reach an assertion and crash the service.

The issue affects all TensorFlow versions prior to 2.7.2, versions from 2.8.0 up to but not including 2.8.1, and versions from 2.9.0 up to but not including 2.9.1. The vulnerability has been patched in TensorFlow 2.10.0 and backported to versions 2.7.2, 2.8.1, and 2.9.1. No known exploits have been reported in the wild, and no workarounds exist. Exploitation requires supplying malformed inputs to the vulnerable function, which may be possible in environments where untrusted or malformed data is processed by TensorFlow models. The impact is limited to denial of service through process crashes rather than code execution or data compromise.
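The rank-1 precondition at the heart of the bug can be illustrated with a small standalone sketch (standard library only; `validate_min_max_shapes` is a hypothetical helper, not a TensorFlow API). In patched releases the op rejects non-rank-1 `min`/`max` with a catchable error; unpatched releases instead hit a `CHECK` that aborts the whole process:

```python
def tensor_rank(shape):
    """The rank of a tensor is the length of its shape tuple."""
    return len(shape)

def validate_min_max_shapes(min_shape, max_shape):
    """Hypothetical mirror of the rank check the patch enforces.

    For per-channel fake quantization, `min` and `max` must each be a
    rank-1 tensor holding one value per channel. Any other rank (for
    example a scalar, shape ()) reached the reachable assertion in
    unpatched TensorFlow and crashed the process.
    """
    for name, shape in (("min", min_shape), ("max", max_shape)):
        if tensor_rank(shape) != 1:
            raise ValueError(
                f"`{name}` must be rank 1 (one value per channel), "
                f"got rank {tensor_rank(shape)} with shape {shape}"
            )

# Rank-1 per-channel vectors pass; a scalar `min` is rejected cleanly
# with an exception rather than a process-terminating CHECK failure.
validate_min_max_shapes((3,), (3,))
```

The key difference after the fix is failure mode, not functionality: a malformed shape produces a recoverable Python-level error instead of an assertion abort that takes down the entire process.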

Potential Impact

For European organizations, the primary impact of this vulnerability is service disruption in environments that utilize TensorFlow for machine learning workloads. This includes research institutions, technology companies, financial services, healthcare providers, and any enterprise leveraging AI/ML for critical operations. A denial of service could interrupt automated decision-making systems, data processing pipelines, or AI-driven applications, potentially causing operational delays and financial losses. Since TensorFlow is widely adopted across industries in Europe, organizations relying on affected versions may experience unexpected crashes if exposed to crafted inputs, especially in multi-tenant or cloud environments where input data may not be fully controlled. However, the vulnerability does not allow for privilege escalation, data leakage, or remote code execution, limiting the impact to availability concerns. The absence of known exploits reduces immediate risk, but unpatched systems remain vulnerable to potential future attacks. The impact is more pronounced in sectors with high reliance on continuous AI/ML services, such as autonomous systems, real-time analytics, and critical infrastructure monitoring.

Mitigation Recommendations

European organizations should prioritize updating TensorFlow installations to a patched version: 2.7.2, 2.8.1, 2.9.1, or 2.10.0 and later. Given that no workarounds exist, patching is the primary mitigation strategy. Organizations should audit their AI/ML pipelines to identify usage of the vulnerable function `tf.quantization.fake_quant_with_min_max_vars_per_channel_gradient` and assess whether untrusted or external inputs could reach this function. Implement input validation and sanitization controls upstream to ensure that the `min` and `max` parameters are strictly rank-1 tensors before processing. For environments where patching is delayed, consider isolating TensorFlow workloads to minimize the impact of crashes, such as running them in containerized or sandboxed environments with automatic restart mechanisms. Monitoring and alerting on TensorFlow process crashes can provide early detection of exploitation attempts. Additionally, organizations should review their supply chain and third-party AI services for TensorFlow usage and coordinate patching efforts accordingly. Finally, maintain awareness of updates from TensorFlow and security advisories for any emerging exploits or related vulnerabilities.
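The patch-level audit suggested above can be sketched as a version check (a hypothetical helper, assuming plain semantic version strings; the patched-release list comes from the advisory itself):

```python
# First patched release per affected minor line, per the advisory:
# 2.7.2, 2.8.1, 2.9.1; everything from 2.10.0 onward ships the fix.
PATCHED = {(2, 7): (2, 7, 2), (2, 8): (2, 8, 1), (2, 9): (2, 9, 1)}

def is_patched(version: str) -> bool:
    """Return True if this TensorFlow version carries the CVE-2022-35990 fix."""
    parts = tuple(int(p) for p in version.split(".")[:3])
    minor_line = parts[:2]
    if minor_line in PATCHED:
        # Backported minor lines: compare against the cherrypicked release.
        return parts >= PATCHED[minor_line]
    # Outside the backported lines, only 2.10.0+ includes the fix.
    return parts >= (2, 10, 0)
```

Comparing integer tuples rather than raw strings matters here: a string comparison would incorrectly order "2.10.0" before "2.9.1".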

Technical Details

Data Version
5.1
Assigner Short Name
GitHub_M
Date Reserved
2022-07-15T00:00:00.000Z
Cisa Enriched
true

Threat ID: 682d9845c4522896dcbf42c5

Added to database: 5/21/2025, 9:09:25 AM

Last enriched: 6/22/2025, 6:21:26 PM

Last updated: 8/17/2025, 5:17:07 PM

