
CVE-2022-35966: CWE-20: Improper Input Validation in tensorflow tensorflow

Medium
Published: Fri Sep 16 2022 (09/16/2022, 20:35:15 UTC)
Source: CVE
Vendor/Project: tensorflow
Product: tensorflow

Description

TensorFlow is an open source platform for machine learning. If `QuantizedAvgPool` is given `min_input` or `max_input` tensors of a nonzero rank, it results in a segfault that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit 7cdf9d4d2083b739ec81cfdace546b0c99f50622. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported range. There are no known workarounds for this issue.
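For quick triage, a short check along the following lines (an illustrative sketch that assumes the third-party `packaging` library is installed) can flag whether a locally installed TensorFlow build still falls in an affected range rather than one of the patched releases named above:

```python
import tensorflow as tf
from packaging import version  # assumed available: pip install packaging

v = version.parse(tf.__version__)

# Affected ranges implied by the advisory: everything before 2.7.2,
# 2.8.x before 2.8.1, and 2.9.x before 2.9.1 (fixed in 2.10.0 and backports).
affected = (
    v < version.parse("2.7.2")
    or version.parse("2.8.0") <= v < version.parse("2.8.1")
    or version.parse("2.9.0") <= v < version.parse("2.9.1")
)

status = "likely affected" if affected else "patched or outside the affected range"
print(f"TensorFlow {tf.__version__}: {status}")
```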

AI-Powered Analysis

Last updated: 06/22/2025, 20:06:40 UTC

Technical Analysis

CVE-2022-35966 is a medium-severity vulnerability in TensorFlow, an open-source machine learning platform widely used for developing and deploying machine learning models. The vulnerability arises from improper input validation (CWE-20) in the `QuantizedAvgPool` operation. Specifically, if the `min_input` or `max_input` tensors passed to this operation have a nonzero rank (i.e., are not the rank-0 scalar tensors the operation expects), the process crashes with a segmentation fault (segfault). This crash can be exploited to mount a denial of service (DoS) attack against the TensorFlow process. The issue affects all versions prior to 2.7.2, the 2.8.x line prior to 2.8.1, and the 2.9.x line prior to 2.9.1. The vulnerability was patched in GitHub commit 7cdf9d4d2083b739ec81cfdace546b0c99f50622 and included in TensorFlow 2.10.0, with backports to 2.7.2, 2.8.1, and 2.9.1. There are currently no known workarounds, and no exploits have been observed in the wild. Exploitation does not require authentication or user interaction, but it does require the attacker to supply crafted input tensors to the vulnerable operation, which may be feasible in environments where TensorFlow models are exposed to untrusted inputs or users. The impact is limited to denial of service via process crash; no direct confidentiality or integrity compromise has been reported.
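To make the shape contract concrete, the following minimal sketch (hypothetical and illustrative, using TensorFlow's public `tf.quantization` and `tf.raw_ops` APIs) shows a well-formed call, where `min_input` and `max_input` are the rank-0 scalars produced by quantization, and notes the malformed variant, left commented out because it is the condition reported to crash unpatched builds:

```python
import tensorflow as tf

# Quantizing a small float tensor yields a quint8 tensor plus rank-0
# (scalar) min/max tensors -- the shapes QuantizedAvgPool expects.
x = tf.random.uniform([1, 4, 4, 1], minval=0.0, maxval=1.0)
q, q_min, q_max = tf.quantization.quantize(x, 0.0, 1.0, tf.quint8)

# Well-formed call: min_input and max_input are scalars (rank 0).
out, out_min, out_max = tf.raw_ops.QuantizedAvgPool(
    input=q, min_input=q_min, max_input=q_max,
    ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="VALID")

# Malformed variant (do not run on unpatched builds): substituting rank-1
# tensors such as tf.constant([0.0]) for the scalars is the nonzero-rank
# condition the advisory describes and reportedly segfaults the process
# instead of raising an ordinary InvalidArgumentError.
# tf.raw_ops.QuantizedAvgPool(
#     input=q, min_input=tf.constant([0.0]), max_input=tf.constant([1.0]),
#     ksize=[1, 2, 2, 1], strides=[1, 2, 2, 1], padding="VALID")
```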

Potential Impact

For European organizations, the primary impact of this vulnerability is the potential disruption of machine learning services that rely on affected TensorFlow versions. Organizations using TensorFlow in production environments for critical applications—such as financial services, healthcare, manufacturing, or telecommunications—may experience service outages or degraded performance if an attacker exploits this flaw to cause crashes. This could lead to operational downtime, loss of availability of AI-driven services, and potential financial or reputational damage. Since TensorFlow is widely used in research institutions and enterprises across Europe, especially in AI and data science projects, the vulnerability could affect a broad range of sectors. However, the impact is limited to denial of service and does not directly compromise data confidentiality or integrity. The risk is higher in environments where TensorFlow models are exposed to external or untrusted inputs, such as public APIs, cloud-hosted ML services, or collaborative platforms. Organizations relying on internal, controlled inputs may have a lower risk profile. Additionally, the absence of known exploits in the wild reduces immediate threat urgency but does not eliminate the risk of future exploitation.

Mitigation Recommendations

1. Upgrade TensorFlow to version 2.10.0 or later, or move to the patched 2.7.2, 2.8.1, or 2.9.1 releases, as soon as possible to eliminate the vulnerability.
2. Implement strict input validation and sanitization on all inputs to TensorFlow models, especially those reaching `QuantizedAvgPool` operations, to ensure that `min_input` and `max_input` tensors have the expected scalar (rank-0) shape before processing (see the sketch after this list).
3. Restrict access to TensorFlow model inference endpoints to trusted users and networks to reduce exposure to crafted malicious inputs.
4. Monitor TensorFlow application logs and system stability for signs of crashes or abnormal terminations that could indicate exploitation attempts.
5. Employ containerization or sandboxing for TensorFlow workloads to isolate crashes and prevent broader system impact.
6. For cloud deployments, leverage provider security features such as Web Application Firewalls (WAFs) and API gateways to filter and validate incoming requests to ML services.
7. Educate development and data science teams about the importance of input validation and timely patching of ML frameworks.
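For code paths that invoke the raw op directly, recommendation 2 can be approximated with a small guard like the hypothetical wrapper below (the function name and structure are illustrative, not part of TensorFlow's API):

```python
import tensorflow as tf

def safe_quantized_avg_pool(q_input, min_input, max_input,
                            ksize, strides, padding):
    """Hypothetical guard: reject non-scalar min/max tensors up front so a
    malformed request fails with a Python exception before reaching the op."""
    min_t = tf.convert_to_tensor(min_input, dtype=tf.float32)
    max_t = tf.convert_to_tensor(max_input, dtype=tf.float32)
    if min_t.shape.rank != 0 or max_t.shape.rank != 0:
        raise ValueError("min_input and max_input must be rank-0 scalar tensors")
    return tf.raw_ops.QuantizedAvgPool(
        input=q_input, min_input=min_t, max_input=max_t,
        ksize=ksize, strides=strides, padding=padding)
```

Note that such a guard reduces exposure but does not replace upgrading to a patched release.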


Technical Details

Data Version: 5.1
Assigner Short Name: GitHub_M
Date Reserved: 2022-07-15T00:00:00.000Z
CISA Enriched: true

Threat ID: 682d9845c4522896dcbf4055

Added to database: 5/21/2025, 9:09:25 AM

Last enriched: 6/22/2025, 8:06:40 PM

Last updated: 7/30/2025, 7:09:11 AM


