
CVE-2022-23582: CWE-617: Reachable Assertion in tensorflow tensorflow

Medium
Published: Fri Feb 04 2022 (02/04/2022, 22:32:17 UTC)
Source: CVE
Vendor/Project: tensorflow
Product: tensorflow

Description

Tensorflow is an Open Source Machine Learning Framework. A malicious user can cause a denial of service by altering a `SavedModel` such that `TensorByteSize` would trigger `CHECK` failures. `TensorShape` constructor throws a `CHECK`-fail if shape is partial or has a number of elements that would overflow the size of an `int`. The `PartialTensorShape` constructor instead does not cause a `CHECK`-abort if the shape is partial, which is exactly what this function needs to be able to return `-1`. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.

AI-Powered Analysis

Last updated: 06/22/2025, 03:52:01 UTC

Technical Analysis

CVE-2022-23582 is a medium-severity vulnerability in TensorFlow, an open-source machine learning framework widely used for developing and deploying machine learning models. The vulnerability arises from a reachable assertion failure (CWE-617) triggered by specially crafted SavedModel files. Specifically, the issue occurs when a malicious user alters a SavedModel such that the TensorByteSize calculation triggers CHECK failures. The root cause lies in the TensorShape constructor, which performs CHECK assertions that abort execution if the shape is partial or if the number of elements would overflow the size of an integer. In contrast, the PartialTensorShape constructor does not abort on partial shapes, allowing it to return -1 safely.

An attacker can exploit this inconsistency by providing a malformed SavedModel with partial or oversized tensor shapes, causing the TensorFlow process to abort unexpectedly, resulting in a denial of service (DoS). This vulnerability affects TensorFlow versions >= 2.7.0 and < 2.7.1, >= 2.6.0 and < 2.6.3, and all versions below 2.5.3. The issue was addressed in TensorFlow 2.8.0, with backported fixes for 2.7.1, 2.6.3, and 2.5.3.

No known exploits have been reported in the wild. The vulnerability requires the attacker to supply a malicious SavedModel file, which implies some level of user interaction or input acceptance by the vulnerable system. The impact is limited to denial of service, as the assertion failure causes the TensorFlow process to terminate unexpectedly, potentially disrupting machine learning workflows or services relying on TensorFlow for inference or training.
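The contrast between the two shape constructors can be sketched in plain Python (the actual implementation is TensorFlow C++; the function names, the int64 bound, and the use of an exception to stand in for a process-aborting CHECK are illustrative assumptions, not TensorFlow's real API):

```python
# Illustrative sketch of the two shape-validation policies described above.
# A dimension of -1 marks an unknown ("partial") dimension, as in
# TensorFlow shape protos. These are NOT TensorFlow functions.

INT64_MAX = 2**63 - 1  # upper bound on a shape's element count

def strict_num_elements(dims):
    """Mimics the TensorShape constructor: a partial shape or an
    element-count overflow triggers a CHECK (here: raises, where the
    real CHECK aborts the whole process)."""
    n = 1
    for d in dims:
        if d < 0:
            raise AssertionError("CHECK failed: shape is partial")
        n *= d
        if n > INT64_MAX:
            raise AssertionError("CHECK failed: element count overflows int64")
    return n

def lenient_num_elements(dims):
    """Mimics the PartialTensorShape-based fix: return -1 for partial or
    overflowing shapes instead of aborting, so the caller (TensorByteSize)
    can report an unknown size gracefully."""
    n = 1
    for d in dims:
        if d < 0:
            return -1
        n *= d
        if n > INT64_MAX:
            return -1
    return n
```

Under the lenient policy, a malformed shape smuggled into a SavedModel yields -1 and can be rejected by the caller, rather than terminating the serving process.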

Potential Impact

For European organizations, the primary impact of this vulnerability is the potential disruption of machine learning services that use affected TensorFlow versions. Organizations relying on TensorFlow for critical applications—such as financial institutions using ML for fraud detection, healthcare providers using ML for diagnostics, or manufacturing firms employing ML for predictive maintenance—may experience service outages or degraded performance due to unexpected process termination. This could lead to operational delays, loss of productivity, and in some cases, impact decision-making processes that depend on real-time ML outputs. Since the vulnerability results in denial of service rather than data compromise, the confidentiality and integrity of data are not directly threatened. However, availability issues could indirectly affect business continuity and service reliability.

The risk is higher in environments where TensorFlow models are loaded from untrusted sources or where user-supplied models are accepted without strict validation. Additionally, automated ML pipelines that ingest external models could be vulnerable to disruption. Given the growing adoption of TensorFlow across various sectors in Europe, the vulnerability could affect a broad range of industries, especially those with advanced AI/ML deployments.

Mitigation Recommendations

European organizations should take the following specific mitigation steps:

1) Upgrade TensorFlow to version 2.8.0 or later, or apply the backported patches for versions 2.7.1, 2.6.3, and 2.5.3 to ensure the vulnerability is fixed.

2) Implement strict validation and sanitization of all SavedModel files before loading them into TensorFlow environments, especially if models originate from external or untrusted sources. This can include schema validation, size checks, and shape verification to detect malformed or malicious models.

3) Restrict the acceptance of user-supplied models to trusted users or isolated environments to minimize exposure.

4) Employ runtime monitoring and alerting for unexpected TensorFlow process crashes or assertion failures to enable rapid detection and response.

5) Where feasible, sandbox TensorFlow execution environments to contain potential denial of service impacts and prevent cascading failures in critical systems.

6) Review and update incident response plans to include scenarios involving ML framework disruptions.

7) Educate development and operations teams about the risks of loading untrusted models and the importance of applying security patches promptly.
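The shape verification in step 2 might look like the following sketch. Extracting tensor shapes from a SavedModel's graph protos would normally use TensorFlow's own proto parsing; here `shapes` is assumed to already be a list of dimension tuples pulled from the model's metadata, and the rank/element limits are illustrative policy choices, not TensorFlow constants:

```python
# Minimal pre-load shape sanity check for model metadata (plain Python).
# `shapes` is assumed to be a list of dimension tuples extracted from a
# model before loading; the limits below are example policy, not real
# TensorFlow constants.

def validate_shapes(shapes, max_rank=8, max_elements=2**32):
    """Flag shapes that are partial, too high-rank, or too large.

    Returns a list of (index, reason) pairs; an empty list means every
    shape passed validation."""
    problems = []
    for i, dims in enumerate(shapes):
        if len(dims) > max_rank:
            problems.append((i, "rank too large"))
            continue
        n = 1
        for d in dims:
            if d < 0:
                problems.append((i, "partial dimension"))
                break
            n *= d
            if n > max_elements:
                problems.append((i, "element count exceeds limit"))
                break
    return problems
```

A model whose shapes produce any problems would be rejected before ever reaching a vulnerable TensorFlow loader, so even an unpatched runtime never sees the malformed input.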


Technical Details

Data Version
5.1
Assigner Short Name
GitHub_M
Date Reserved
2022-01-19T00:00:00.000Z
Cisa Enriched
true

Threat ID: 682d9848c4522896dcbf61bf

Added to database: 5/21/2025, 9:09:28 AM

Last enriched: 6/22/2025, 3:52:01 AM

Last updated: 8/12/2025, 1:31:38 PM

