CVE-2022-35979: CWE-20: Improper Input Validation in tensorflow tensorflow

Medium
Published: Fri Sep 16 2022 (09/16/2022, 21:10:10 UTC)
Source: CVE
Vendor/Project: tensorflow
Product: tensorflow

Description

TensorFlow is an open source platform for machine learning. If `QuantizedRelu` or `QuantizedRelu6` are given nonscalar inputs for `min_features` or `max_features`, it results in a segfault that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit 49b3824d83af706df0ad07e4e677d88659756d89. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported range. There are no known workarounds for this issue.

AI-Powered Analysis

Last updated: 06/22/2025, 19:51:23 UTC

Technical Analysis

CVE-2022-35979 is a medium-severity vulnerability in TensorFlow, an open-source machine learning platform widely used for developing and deploying machine learning models. The vulnerability arises from improper input validation (CWE-20) in the handling of the QuantizedRelu and QuantizedRelu6 operations. Specifically, when these operations receive nonscalar inputs for the min_features or max_features parameters, the operation crashes with a segmentation fault (segfault). This crash can be exploited to mount a denial-of-service (DoS) attack against the TensorFlow process. The issue affects multiple TensorFlow versions: all versions prior to 2.7.2, versions from 2.8.0 up to but not including 2.8.1, and versions from 2.9.0 up to but not including 2.9.1. The vulnerability was patched in GitHub commit 49b3824d83af706df0ad07e4e677d88659756d89, with the fix included in TensorFlow 2.10.0 and backported to 2.9.1, 2.8.1, and 2.7.2. There are no known workarounds, and no exploits have been observed in the wild to date. Triggering the vulnerability requires neither authentication nor user interaction, but an attacker must be able to supply crafted inputs to the affected operations, which typically occurs in environments where TensorFlow is exposed to untrusted input or where models are dynamically loaded or executed with external data. The impact is limited to denial of service via process crash, with no evidence of code execution or data corruption. However, the disruption of machine learning services or pipelines can have operational consequences, especially in critical or production environments.
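
As an illustration of the input class the advisory describes, the sketch below calls the raw QuantizedRelu op with a nonscalar min_features tensor. The tensor shapes and values are assumptions chosen for demonstration, not a reproduction taken from the advisory; on an unpatched build this class of call crashes the process, while patched releases reject the input with an error instead.

```python
import tensorflow as tf

# Illustrative sketch of the malformed-input class behind CVE-2022-35979.
# On unpatched TensorFlow (< 2.7.2, 2.8.0, 2.9.0) a nonscalar
# min_features/max_features can crash the process; patched builds
# (2.7.2 / 2.8.1 / 2.9.1 / 2.10.0 and later) reject the input instead.

# Build a small quint8 tensor to act as the quantized activations.
x = tf.random.uniform([4, 2], minval=0.0, maxval=10.0)
features, _, _ = tf.quantization.quantize(x, 0.0, 10.0, tf.quint8)

# min_features is expected to be a scalar; passing a rank-1 tensor is
# the shape violation described in the advisory.
bad_min = tf.constant([0.0, 0.0], dtype=tf.float32)
good_max = tf.constant(10.0, dtype=tf.float32)

tf.raw_ops.QuantizedRelu(
    features=features,
    min_features=bad_min,
    max_features=good_max,
    out_type=tf.quint8,
)
```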

Potential Impact

For European organizations, the primary impact of this vulnerability is the potential disruption of machine learning services that rely on affected TensorFlow versions. Organizations in sectors such as finance, healthcare, manufacturing, and telecommunications that use TensorFlow for predictive analytics, automated decision-making, or real-time data processing may experience service outages or degraded performance due to crashes triggered by this vulnerability. While the vulnerability does not lead to data breaches or unauthorized code execution, denial of service can interrupt critical workflows, delay processing, and cause financial or reputational damage. Additionally, organizations that provide machine learning as a service or deploy TensorFlow models in cloud or edge environments may face increased risk if untrusted inputs are processed without adequate validation. The lack of known exploits reduces immediate risk, but the widespread use of TensorFlow and the absence of workarounds mean that unpatched systems remain vulnerable to potential future attacks. The impact is more pronounced in environments where TensorFlow is exposed to external inputs or integrated into automated pipelines without strict input validation controls.

Mitigation Recommendations

European organizations should prioritize upgrading TensorFlow installations to a patched release: 2.7.2, 2.8.1, 2.9.1, or 2.10.0 and later. Since no workarounds exist, patching is the primary mitigation. Additionally, organizations should implement strict input validation and sanitization on all data fed into TensorFlow models, especially for parameters passed to the QuantizedRelu and QuantizedRelu6 operations (see the sketch below). Deploying runtime monitoring and anomaly detection to identify unexpected crashes or segmentation faults in machine learning services can help detect exploitation attempts early. For environments where upgrading is not immediately feasible, isolating TensorFlow workloads in sandboxed or containerized environments can limit the impact of crashes on broader systems. Organizations should also review and harden access controls on TensorFlow model endpoints, ensuring that only trusted users or systems can supply inputs to vulnerable operations. Finally, integrating TensorFlow usage into existing security information and event management (SIEM) systems can improve visibility and response capabilities.
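
To make the input-validation recommendation concrete, the following is a minimal, hedged sketch of a defensive wrapper around the raw op. The function name quantized_relu_checked is hypothetical and not part of TensorFlow's API; the same shape check applies equally to QuantizedRelu6.

```python
import tensorflow as tf

def quantized_relu_checked(features, min_features, max_features,
                           out_type=tf.quint8):
    """Hypothetical wrapper: reject nonscalar range inputs before they
    reach tf.raw_ops.QuantizedRelu (the shape class behind CVE-2022-35979)."""
    min_t = tf.convert_to_tensor(min_features, dtype=tf.float32)
    max_t = tf.convert_to_tensor(max_features, dtype=tf.float32)

    # The range inputs are expected to be scalars; refuse anything else
    # instead of handing it to the op on a possibly unpatched build.
    if min_t.shape.rank != 0 or max_t.shape.rank != 0:
        raise ValueError(
            "min_features and max_features must be scalars, got shapes "
            f"{min_t.shape} and {max_t.shape}"
        )

    return tf.raw_ops.QuantizedRelu(
        features=features,
        min_features=min_t,
        max_features=max_t,
        out_type=out_type,
    )
```

A wrapper of this kind is defense in depth for pipelines that accept external data, not a substitute for upgrading to a patched release.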

Technical Details

Data Version: 5.1
Assigner Short Name: GitHub_M
Date Reserved: 2022-07-15T00:00:00.000Z
CISA Enriched: true

Threat ID: 682d9845c4522896dcbf40b8

Added to database: 5/21/2025, 9:09:25 AM

Last enriched: 6/22/2025, 7:51:23 PM

Last updated: 8/12/2025, 3:45:26 PM
