
CVE-2022-21731: Type confusion in TensorFlow ConcatV2 shape inference

Medium
Published: Thu Feb 03 2022 (02/03/2022, 11:37:56 UTC)
Source: CVE
Vendor/Project: Google
Product: TensorFlow

Description

TensorFlow is an open source machine learning framework. The implementation of shape inference for `ConcatV2` can be used to trigger a denial of service via a segfault caused by a type confusion. The `axis` argument is translated into `concat_dim` in the `ConcatShapeHelper` helper function. A value for `min_rank` is then computed from `concat_dim` and used to validate that the `values` tensor has at least the required rank. However, `WithRankAtLeast` receives the lower bound as a 64-bit value and only compares it against the maximum representable 32-bit integer. Because `min_rank` is a 32-bit value, certain values of `axis` make the `rank` argument negative, so the error check is bypassed. The fix will be included in TensorFlow 2.8.0. The commit will also be cherry-picked onto TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these release lines are also affected and still within the supported range.
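The advisory does not include a proof-of-concept, but based on the description a triggering input would look roughly like the sketch below. The specific axis value and the use of `tf.function` to force graph-mode shape inference are assumptions, so treat this as illustrative rather than a confirmed reproducer, and only run it in a disposable environment.

```python
import tensorflow as tf

# Illustrative only: the axis value is an assumption, chosen so that
# concat_dim + 1 no longer fits in a signed 32-bit integer.
@tf.function  # graph tracing exercises the ConcatV2 shape-inference path
def concat_with_oversized_axis():
    return tf.raw_ops.ConcatV2(
        values=[tf.constant([1, 2, 3]), tf.constant([4, 5, 6])],
        axis=tf.constant(0xB500985D, dtype=tf.int64),
    )

# On a patched build this is expected to fail with an InvalidArgument error
# rather than crashing; the advisory reports a segfault on unpatched versions.
# concat_with_oversized_axis()
```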

AI-Powered Analysis

Last updated: 07/06/2025, 23:26:08 UTC

Technical Analysis

CVE-2022-21731 is a medium-severity vulnerability in TensorFlow, an open-source machine learning framework widely used for developing and deploying ML models. The flaw resides in the shape inference implementation for the ConcatV2 operation, which concatenates tensors along a specified axis. The vulnerability arises from a type confusion and improper validation in the handling of the `axis` argument: the `axis` parameter is converted into `concat_dim` within the ConcatShapeHelper function, which then computes a `min_rank` value used to validate the rank of the input tensors. The validation function WithRankAtLeast receives the lower bound as a 64-bit value but only rejects values greater than the maximum 32-bit integer; because `min_rank` is stored as a 32-bit value, a sufficiently large `axis` causes it to wrap around to a negative number, which passes the check and effectively bypasses rank validation. As a result, an attacker can craft inputs that trigger a segmentation fault (segfault), causing a denial of service (DoS) by crashing the TensorFlow process. Affected releases are TensorFlow versions prior to 2.5.3, the 2.6 series prior to 2.6.3, and 2.7.0; the fix ships in 2.8.0 and is cherry-picked into 2.7.1, 2.6.3, and 2.5.3. The vulnerability does not impact confidentiality or integrity but affects availability by enabling DoS attacks. Exploitation requires the ability to supply malicious inputs to a TensorFlow instance, which typically implies some level of access to or interaction with the ML service or environment. No exploits are currently known in the wild. The underlying weakness is classified as CWE-843 (Access of Resource Using Incompatible Type, i.e. type confusion). The CVSS v3.1 base score is 6.5, reflecting medium severity: network attack vector, low attack complexity, low privileges required, no user interaction, and impact limited to availability.
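The bypass comes down to a 64-bit quantity being narrowed into a 32-bit `min_rank`. The sketch below reproduces that arithmetic in isolation; the concrete axis value is an illustrative assumption, not taken from the advisory.

```python
import ctypes

axis = 0xB500985D                                  # illustrative oversized axis value
concat_dim = axis                                  # held as a 64-bit integer in shape inference
min_rank = ctypes.c_int32(concat_dim + 1).value    # narrowed to a signed 32-bit int

print(min_rank)  # negative: the wrapped-around value of concat_dim + 1
# A rank check that only rejects values greater than INT32_MAX never fires for a
# negative min_rank, so the bogus lower bound flows on into shape inference.
```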

Potential Impact

For European organizations, the primary impact of this vulnerability is the potential for denial-of-service attacks against systems running vulnerable TensorFlow versions. Organizations that use TensorFlow for critical machine learning workloads (for example, financial institutions employing ML for fraud detection, healthcare providers using ML for diagnostics, or industrial firms leveraging ML for automation) may experience service disruptions if the flaw is exploited. This could lead to downtime, loss of productivity, and potential cascading effects on dependent systems. Since the vulnerability does not compromise data confidentiality or integrity, the risk of data breaches is low; however, availability interruptions in ML-driven services could degrade operational capabilities and customer trust. The requirement for some level of privilege to exploit reduces the risk from external attackers but raises concerns about insider threats or compromised internal systems. Given the widespread adoption of TensorFlow in research, academia, and industry across Europe, the vulnerability poses a moderate operational risk, especially in environments where ML services are exposed or integrated into critical workflows.

Mitigation Recommendations

European organizations should take the following specific mitigation steps: 1) Identify all TensorFlow deployments and determine the versions in use, focusing on releases prior to 2.5.3, the 2.6 series prior to 2.6.3, and 2.7.0, which are vulnerable (a version-check sketch follows below). 2) Upgrade to TensorFlow 2.8.0 or later, or to the patched 2.5.3, 2.6.3, or 2.7.1 releases; if an immediate upgrade is not feasible, backport the relevant patch from the TensorFlow repository to affected versions. 3) Restrict access to TensorFlow services to trusted users and systems to minimize the risk of malicious input injection. 4) Implement input validation and sanitization at the application layer to prevent malformed tensors from reaching the vulnerable ConcatV2 operation. 5) Monitor TensorFlow service logs and system stability for signs of crashes or abnormal behavior indicative of attempted exploitation. 6) Employ runtime protections such as containerization or sandboxing to isolate TensorFlow processes and limit the impact of crashes. 7) Educate developers and ML engineers about secure coding practices and the importance of timely patching in ML frameworks. These targeted actions go beyond generic advice by focusing on version management, access control, input validation, and operational monitoring specific to TensorFlow environments.
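For step 1, a short helper along these lines can flag installations that predate the patched releases. The function name and the use of the third-party `packaging` library are illustrative assumptions rather than official tooling, and the check is a rough triage aid, not a substitute for reviewing deployment inventories.

```python
from packaging import version  # third-party 'packaging' module, assumed available

import tensorflow as tf

# First release of each maintained branch that contains the fix, per the advisory.
PATCHED = {"2.5": "2.5.3", "2.6": "2.6.3", "2.7": "2.7.1"}

def appears_vulnerable(tf_version: str) -> bool:
    """Rough triage: True if the given TensorFlow version predates the fix."""
    v = version.parse(tf_version)
    branch = f"{v.major}.{v.minor}"
    if branch in PATCHED:
        return v < version.parse(PATCHED[branch])
    return v < version.parse("2.5.0")  # anything older than the 2.5 branch is unpatched

print(tf.__version__, "needs upgrade:", appears_vulnerable(tf.__version__))
```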


Technical Details

Data Version
5.1
Assigner Short Name
GitHub_M
Date Reserved
2021-11-16T00:00:00.000Z
Cisa Enriched
true
Cvss Version
3.1
State
PUBLISHED

Threat ID: 682d981ec4522896dcbdbec9

Added to database: 5/21/2025, 9:08:46 AM

Last enriched: 7/6/2025, 11:26:08 PM

Last updated: 8/11/2025, 8:16:06 PM

Views: 14

