
CVE-2022-23572: CWE-754: Improper Check for Unusual or Exceptional Conditions in tensorflow tensorflow

Medium
Published: Fri Feb 04 2022 (02/04/2022, 22:32:29 UTC)
Source: CVE
Vendor/Project: tensorflow
Product: tensorflow

Description

TensorFlow is an open-source machine learning framework. Under certain scenarios, TensorFlow can fail to specialize a type during shape inference. This case is guarded by a `DCHECK`; however, `DCHECK` is a no-op in production builds and an assertion failure in debug builds. In the first case, execution proceeds to the `ValueOrDie` line, which crashes the process because `ret` contains an error `Status`, not a value. In the second case, the program crashes due to the assertion failure itself. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1 and TensorFlow 2.6.3, as these are also affected and still in supported range.
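The control flow described above can be sketched in Python. TensorFlow's actual implementation is C++; the `dcheck`, `StatusOr`, and `value_or_die` names below are simplified stand-ins for illustration, not real TensorFlow APIs.

```python
# Python analogue of the C++ DCHECK / ValueOrDie pattern described above.
# All names here are illustrative stand-ins, not real TensorFlow APIs.
PRODUCTION_BUILD = True  # DCHECK is compiled out of production builds


def dcheck(cond, msg="DCHECK failed"):
    # Assertion failure in debug builds, no-op in production builds.
    if not PRODUCTION_BUILD:
        assert cond, msg


class StatusOr:
    """Toy stand-in for a value-or-error result type."""

    def __init__(self, value=None, error=None):
        self.value = value
        self.error = error

    def ok(self):
        return self.error is None

    def value_or_die(self):
        # The real ValueOrDie aborts the whole process on an error Status;
        # raising an exception here keeps the sketch runnable.
        if not self.ok():
            raise RuntimeError(f"ValueOrDie called on error Status: {self.error}")
        return self.value


# Simulate shape inference failing to specialize a type.
ret = StatusOr(error="failed to specialize type")
dcheck(ret.ok())  # no-op in production, so the bad state goes unnoticed
try:
    ret.value_or_die()  # reached in production builds: this is the crash
except RuntimeError as e:
    print("crash:", e)
```

In a debug build (`PRODUCTION_BUILD = False`), the `dcheck` call itself would fail first, which mirrors the second crash case in the advisory.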

AI-Powered Analysis

Last updated: 06/22/2025, 04:06:47 UTC

Technical Analysis

CVE-2022-23572 is a medium-severity vulnerability affecting TensorFlow, an open-source machine learning framework widely used for developing and deploying machine learning models. The vulnerability arises from improper handling of unusual or exceptional conditions during shape inference, a critical step in TensorFlow's graph compilation process where tensor shapes are determined to optimize computations. Under certain scenarios, TensorFlow fails to properly specialize a type during shape inference. The code relies on the `DCHECK` macro to catch this condition, but `DCHECK` behaves differently depending on the build configuration: in debug builds it triggers an assertion failure, crashing the program; in production builds it is a no-op, allowing execution to continue unchecked. Execution then reaches a call to `ValueOrDie` on a `Status` object that holds an error rather than a valid value, which also crashes the process.

The vulnerability affects TensorFlow versions prior to 2.5.3, the 2.6.x series prior to 2.6.3, and 2.7.0. The issue has been addressed in TensorFlow 2.8.0, with backported fixes in 2.7.1 and 2.6.3. There are no known exploits in the wild at this time.

The root cause is classified under CWE-754, improper check for unusual or exceptional conditions, which can lead to crashes or denial of service. Because the flaw causes crashes both in debug builds (assertion failure) and production builds (unhandled error state), it primarily impacts availability rather than confidentiality or integrity. Exploitation does not require authentication or user interaction, but the scope is limited to environments running vulnerable TensorFlow versions and executing shape inference scenarios that trigger the flaw.

Potential Impact

For European organizations leveraging TensorFlow in production environments—particularly those deploying machine learning models in critical applications such as finance, healthcare, manufacturing, or autonomous systems—this vulnerability poses a risk of denial of service due to unexpected crashes. Such crashes can interrupt model training or inference pipelines, leading to downtime, degraded service availability, or failed automated decision-making processes. While the vulnerability does not directly compromise data confidentiality or integrity, the disruption of machine learning workflows can have downstream operational impacts, including delayed analytics, compromised automation, and potential financial loss. Organizations relying on TensorFlow in debug or development environments may experience assertion failures that hinder testing and development cycles.

Given the increasing adoption of AI/ML technologies across European industries, the availability impact could affect sectors with high reliance on real-time or batch ML processing. However, since exploitation requires triggering specific shape inference conditions, the practical impact may be limited to particular workloads or models. No known active exploitation reduces immediate risk, but unpatched systems remain vulnerable to potential denial of service attacks or accidental crashes.

Mitigation Recommendations

1. Upgrade TensorFlow to version 2.8.0 or later, or apply the backported patches available in versions 2.7.1 and 2.6.3, to ensure the vulnerability is remediated.
2. Review and audit machine learning pipelines to identify usage of affected TensorFlow versions, prioritizing production environments and critical workloads.
3. Implement robust monitoring and alerting for TensorFlow process crashes or abnormal terminations to detect potential exploitation or accidental triggering of the vulnerability.
4. Where upgrading is not immediately feasible, consider isolating TensorFlow workloads in containerized or sandboxed environments to limit the impact of crashes on broader systems.
5. Conduct thorough testing of machine learning models and shape inference scenarios in controlled environments to identify any conditions that might trigger the vulnerability, allowing preemptive adjustments or workarounds.
6. Educate development and operations teams about the importance of using production builds with proper error handling rather than relying on debug builds that may crash unexpectedly.
7. Stay informed about TensorFlow security advisories and apply patches promptly as part of a continuous vulnerability management process.
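The crash monitoring in recommendation 3 can be approximated by supervising TensorFlow workers as subprocesses and alerting on abnormal exits. A minimal POSIX sketch follows; the worker command below is a placeholder that deliberately aborts itself, standing in for a crashed TensorFlow job.

```python
# Supervise a worker process and flag abnormal termination, e.g. the
# SIGABRT raised by a failed CHECK/assertion in a vulnerable TensorFlow.
import subprocess
import sys


def run_and_monitor(cmd):
    proc = subprocess.run(cmd)
    if proc.returncode != 0:
        # On POSIX, a negative return code means the process died on a
        # signal (-6 corresponds to SIGABRT).
        print(f"ALERT: worker exited abnormally (code {proc.returncode})")
    return proc.returncode


# Placeholder worker that aborts itself, standing in for a crashed TF job.
code = run_and_monitor(
    [sys.executable, "-c", "import os, signal; os.kill(os.getpid(), signal.SIGABRT)"]
)
```

In production this supervision would typically be delegated to the orchestrator (systemd, Kubernetes liveness probes, or a batch scheduler) rather than hand-rolled, but the signal: a nonzero or negative exit code from a TensorFlow worker is the observable symptom to alert on.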


Technical Details

Data Version
5.1
Assigner Short Name
GitHub_M
Date Reserved
2022-01-19T00:00:00.000Z
Cisa Enriched
true

