
CVE-2022-36001: CWE-617: Reachable Assertion in tensorflow tensorflow

Medium
Published: Fri Sep 16 2022 (09/16/2022, 22:10:10 UTC)
Source: CVE
Vendor/Project: tensorflow
Product: tensorflow

Description

TensorFlow is an open source platform for machine learning. When `DrawBoundingBoxes` receives an input `boxes` that is not of dtype `float`, it gives a `CHECK` fail that can trigger a denial of service attack. We have patched the issue in GitHub commit da0d65cdc1270038e72157ba35bf74b85d9bda11. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported range. There are no known workarounds for this issue.

AI-Powered Analysis

Last updated: 06/22/2025, 17:34:41 UTC

Technical Analysis

CVE-2022-36001 is a medium-severity vulnerability in TensorFlow, an open-source machine learning platform widely used for developing and deploying machine learning models. The issue arises in the `DrawBoundingBoxes` function, which expects the input parameter `boxes` to be of data type `float`. If an input of a different data type is provided, the function triggers a `CHECK` failure, which is an assertion that halts program execution. This behavior can be exploited to cause a denial of service (DoS) by crashing the application or service that relies on TensorFlow for machine learning tasks. The vulnerability is classified under CWE-617 (Reachable Assertion), indicating that an assertion can be triggered by external input, leading to a program crash or abnormal termination. Affected versions include TensorFlow versions prior to 2.7.2, versions 2.8.0 up to but not including 2.8.1, and versions 2.9.0 up to but not including 2.9.1. The issue was patched in commit da0d65cdc1270038e72157ba35bf74b85d9bda11 and is included in TensorFlow 2.10.0, with backports planned for 2.7.2, 2.8.1, and 2.9.1. There are currently no known workarounds, and no exploits have been observed in the wild. The vulnerability requires that an attacker can supply malformed input to the `DrawBoundingBoxes` function, which typically would require some level of access to the machine learning pipeline or the ability to influence input data. The impact is primarily denial of service, as the assertion failure stops the TensorFlow process, potentially disrupting services that depend on it.
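The CWE-617 pattern can be illustrated without TensorFlow itself. The sketch below is a hypothetical Python wrapper (not the patched TensorFlow code) showing the difference between the vulnerable behavior, where a C++ `CHECK` failure aborts the entire process, and the graceful alternative of raising a catchable exception when `boxes` has the wrong dtype:

```python
import numpy as np

def draw_bounding_boxes_checked(images, boxes):
    """Hypothetical wrapper illustrating CWE-617 (Reachable Assertion).

    In unpatched TensorFlow, a non-float `boxes` tensor reaches a C++
    CHECK macro, which terminates the whole process (denial of service).
    Validating the dtype at the Python layer instead raises an exception
    that a serving loop can catch and recover from.
    """
    boxes = np.asarray(boxes)
    if boxes.dtype != np.float32:
        # Graceful path: reject the request without crashing the process.
        raise TypeError(f"boxes must be float32, got {boxes.dtype}")
    return boxes  # placeholder for the real drawing operation

# A malformed (integer) input is rejected with a catchable exception
# rather than halting the interpreter:
try:
    draw_bounding_boxes_checked(None, np.array([[1, 2, 3, 4]], dtype=np.int64))
except TypeError as exc:
    print("rejected:", exc)
```

The function name and wrapper structure are illustrative only; the actual fix landed in the TensorFlow kernel in commit da0d65cdc1270038e72157ba35bf74b85d9bda11.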

Potential Impact

For European organizations, the impact of CVE-2022-36001 depends on the extent to which TensorFlow is integrated into their machine learning workflows and production environments. Organizations using TensorFlow for critical applications such as healthcare diagnostics, financial modeling, autonomous systems, or industrial automation could experience service interruptions if this vulnerability is exploited. A denial of service in these contexts could lead to operational downtime, loss of productivity, and potential financial losses. Additionally, organizations providing machine learning as a service (MLaaS) or deploying TensorFlow models in cloud or edge environments may face availability issues that affect end-users. Since the vulnerability requires malformed input to trigger, environments where external or untrusted data is processed by TensorFlow are at higher risk. However, the vulnerability does not directly compromise confidentiality or integrity, limiting the impact to availability. The absence of known exploits reduces immediate risk, but the lack of workarounds means that unpatched systems remain vulnerable. Given the growing adoption of AI and machine learning in Europe, especially in sectors like automotive, finance, and healthcare, this vulnerability could affect a broad range of organizations if not addressed promptly.

Mitigation Recommendations

To mitigate CVE-2022-36001, European organizations should prioritize upgrading TensorFlow to version 2.10.0 or later, or apply the backported patches for versions 2.7.2, 2.8.1, and 2.9.1 as soon as they become available. Since no workarounds exist, patching is the primary defense. Organizations should audit their machine learning pipelines to identify any components that use the `DrawBoundingBoxes` function or process bounding box inputs, ensuring that input data types are validated and sanitized before being passed to TensorFlow functions. Implementing strict input validation at the application layer can reduce the risk of malformed inputs reaching TensorFlow. Additionally, deploying runtime monitoring and anomaly detection to identify unexpected crashes or assertion failures in TensorFlow processes can help detect exploitation attempts early. For environments where TensorFlow is exposed to external or untrusted inputs, consider isolating these processes using containerization or sandboxing to limit the impact of potential crashes. Finally, organizations should review their incident response plans to include scenarios involving machine learning service disruptions and ensure that backups and failover mechanisms are in place to maintain availability.
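The application-layer validation described above might look like the following sketch. It assumes the `[batch, num_bounding_boxes, 4]` box layout that TensorFlow's `draw_bounding_boxes` documents; the helper name is hypothetical, and coercing to `float32` is a defensive measure, not a substitute for patching:

```python
import numpy as np

def sanitize_boxes(raw_boxes):
    """Hypothetical validator: coerce untrusted bounding-box input to
    float32 before it reaches TensorFlow, so a malformed dtype cannot
    reach the CHECK-failure path in unpatched builds."""
    arr = np.asarray(raw_boxes)
    # Reject structurally invalid payloads with a catchable error.
    if arr.ndim != 3 or arr.shape[-1] != 4:
        raise ValueError(
            f"expected shape [batch, num_boxes, 4], got {arr.shape}"
        )
    # Cast integer or float64 input to the dtype the op expects.
    return arr.astype(np.float32, copy=False)

# Untrusted integer input is silently coerced to a safe dtype:
boxes = sanitize_boxes([[[10, 20, 30, 40]]])
print(boxes.dtype)  # float32
```

Placing this check at the service boundary (e.g. in the request handler of an inference API) keeps malformed data out of the TensorFlow graph entirely, which also simplifies logging of rejected requests for the anomaly-detection monitoring recommended above.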


Technical Details

Data Version
5.1
Assigner Short Name
GitHub_M
Date Reserved
2022-07-15T00:00:00.000Z
Cisa Enriched
true

Threat ID: 682d9845c4522896dcbf4337

Added to database: 5/21/2025, 9:09:25 AM

Last enriched: 6/22/2025, 5:34:41 PM

Last updated: 8/15/2025, 12:26:05 PM

