CVE-2022-35983: CWE-617: Reachable Assertion in tensorflow/tensorflow

Medium
Published: Fri Sep 16 2022 (09/16/2022, 21:40:10 UTC)
Source: CVE
Vendor/Project: tensorflow
Product: tensorflow

Description

TensorFlow is an open source platform for machine learning. If `Save` or `SaveSlices` is run over tensors of an unsupported `dtype`, it results in a `CHECK` fail that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit 5dd7b86b84a864b834c6fa3d7f9f51c87efa99d4. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported range. There are no known workarounds for this issue.
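
For illustration, the snippet below sketches how the affected raw op can be reached from Python. It is an assumption-laden reproduction sketch, not the advisory's official proof of concept: `tf.raw_ops.Save` is a real TensorFlow binding, but the choice of `tf.uint64` as an "unsupported" dtype is a guess based on the description above, and the exact dtypes rejected may differ between builds.

```python
# Hypothetical reproduction sketch for the CHECK failure described above.
# Assumption: tf.uint64 is one of the dtypes the legacy Save kernel does
# not handle; substitute whichever dtype is unsupported in your build.
import tensorflow as tf

tf.raw_ops.Save(
    filename=tf.constant("ckpt"),             # scalar string: output path
    tensor_names=tf.constant(["t"]),          # one name per tensor in `data`
    data=[tf.constant([], dtype=tf.uint64)],  # tensor with the problematic dtype
)
# On affected versions this aborts the whole process via a CHECK failure
# instead of raising a catchable Python exception.
```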

AI-Powered Analysis

AI analysis last updated: 06/22/2025, 19:50:42 UTC

Technical Analysis

CVE-2022-35983 is a vulnerability identified in TensorFlow, an open-source machine learning platform widely used for developing and deploying machine learning models. The issue arises when the `Save` or `SaveSlices` functions are invoked on tensors with unsupported data types (`dtype`). This triggers a `CHECK` failure, which is an assertion failure within the TensorFlow codebase, leading to a program crash. The vulnerability is classified under CWE-617 (Reachable Assertion), indicating that an assertion statement can be triggered by external input, causing the application to terminate unexpectedly. The affected TensorFlow versions include all releases prior to 2.7.2, versions from 2.8.0 up to but not including 2.8.1, and versions from 2.9.0 up to but not including 2.9.1. The issue has been patched in TensorFlow 2.10.0 and backported to versions 2.7.2, 2.8.1, and 2.9.1. There are no known workarounds, and no exploits have been reported in the wild to date. The vulnerability can be triggered by providing tensors of unsupported data types to the save functions, causing a denial of service (DoS) by crashing the TensorFlow process. This affects the availability of services relying on TensorFlow for machine learning tasks, particularly those that perform model saving or checkpointing operations. Since TensorFlow is often embedded in larger applications or services, this vulnerability could disrupt machine learning workflows or production environments that depend on continuous model training or inference pipelines.
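
The affected ranges quoted above can be expressed as a small version check. The helper below is a sketch, assuming the third-party `packaging` library is available and that `tf.__version__` carries a plain release string; pre-release or dev suffixes may need extra normalization.

```python
# Sketch: does an installed TensorFlow version fall in the affected ranges
# listed in the analysis above? The helper itself is illustrative only.
from packaging.version import Version
import tensorflow as tf

def is_affected(version_string: str) -> bool:
    """Return True if the version is in an affected, unpatched range."""
    v = Version(version_string)
    return (
        v < Version("2.7.2")
        or Version("2.8.0") <= v < Version("2.8.1")
        or Version("2.9.0") <= v < Version("2.9.1")
    )

print(is_affected(tf.__version__))  # True on unpatched 2.7/2.8/2.9 builds
```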

Potential Impact

For European organizations, the primary impact of CVE-2022-35983 is a denial of service condition affecting machine learning infrastructure. Organizations utilizing TensorFlow for critical AI/ML workloads—such as financial institutions, healthcare providers, automotive manufacturers, and research institutions—may experience interruptions in model training, evaluation, or deployment processes. This could delay decision-making, degrade service quality, or halt automated systems relying on ML models. The vulnerability does not directly compromise confidentiality or integrity but impacts availability, which can have cascading effects on business operations. Given the growing adoption of AI/ML in sectors like finance, healthcare, and manufacturing across Europe, the disruption potential is significant, especially in environments where TensorFlow is integrated into production pipelines without adequate isolation or failover mechanisms. Additionally, since no authentication or user interaction is required to trigger the assertion failure (provided an attacker can get malformed input to the save functions), the risk is elevated in multi-tenant or exposed environments where untrusted inputs might reach TensorFlow components.

Mitigation Recommendations

To mitigate this vulnerability, European organizations should prioritize upgrading TensorFlow installations to version 2.10.0 or later, or apply the backported patches available for versions 2.7.2, 2.8.1, and 2.9.1. Given the absence of workarounds, patching is the most effective defense. Organizations should audit their machine learning pipelines to identify any use of the `Save` or `SaveSlices` functions and validate the data types of tensors being saved to ensure they conform to supported types. Implementing input validation and sanitization at the application layer before invoking TensorFlow save operations can reduce the risk of triggering the assertion. Additionally, deploying TensorFlow services with process isolation and monitoring for unexpected crashes can help detect exploitation attempts early. For critical environments, consider implementing redundancy and failover mechanisms to maintain availability if a TensorFlow process crashes. Finally, restrict access to TensorFlow model saving interfaces to trusted users and systems to minimize exposure to malformed inputs.
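
As one concrete, purely illustrative way to apply the dtype-validation advice above, the wrapper below rejects tensors outside an allowlist before the low-level save op runs. Both the allowlist and the helper name `checked_save` are assumptions, not a TensorFlow API; tailor the set to the dtypes your checkpointing path actually supports.

```python
# Illustrative pre-save guard; the allowlist is an assumption to be tuned
# per deployment, and checked_save is a hypothetical helper, not a TF API.
import tensorflow as tf

SUPPORTED_SAVE_DTYPES = {
    tf.float16, tf.float32, tf.float64,
    tf.int8, tf.int16, tf.int32, tf.int64,
    tf.uint8, tf.bool, tf.string,
    tf.complex64, tf.complex128,
}

def checked_save(filename, names, tensors):
    """Validate tensor dtypes before invoking the legacy Save op, so a bad
    dtype raises a catchable error instead of crashing the process."""
    for name, tensor in zip(names, tensors):
        if tensor.dtype not in SUPPORTED_SAVE_DTYPES:
            raise ValueError(
                f"refusing to save tensor {name!r} with dtype {tensor.dtype}"
            )
    tf.raw_ops.Save(
        filename=tf.constant(filename),
        tensor_names=tf.constant(list(names)),
        data=list(tensors),
    )
```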

Technical Details

Data Version: 5.1
Assigner Short Name: GitHub_M
Date Reserved: 2022-07-15T00:00:00.000Z
CISA Enriched: true

Threat ID: 682d9845c4522896dcbf40d9

Added to database: 5/21/2025, 9:09:25 AM

Last enriched: 6/22/2025, 7:50:42 PM

Last updated: 8/11/2025, 9:40:29 PM
