
CVE-2022-36003: CWE-617: Reachable Assertion in tensorflow tensorflow

Severity: Medium
Published: Fri Sep 16 2022 (09/16/2022, 22:10:21 UTC)
Source: CVE
Vendor/Project: tensorflow
Product: tensorflow

Description

TensorFlow is an open source platform for machine learning. When `RandomPoissonV2` receives a large input shape and rates, it triggers a `CHECK` failure that can be used to mount a denial of service attack. We have patched the issue in GitHub commit 552bfced6ce4809db5f3ca305f60ff80dd40c5a3. The fix will be included in TensorFlow 2.10.0. We will also cherry-pick this commit onto TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still within the supported range. There are no known workarounds for this issue.
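As a quick way to check exposure, the installed TensorFlow version can be compared against the patched releases listed above. The snippet below is a minimal sketch, not part of the advisory; it assumes the `packaging` library is available in the environment.

```python
import tensorflow as tf
from packaging import version  # assumption: packaging is installed

# Releases carrying commit 552bfced6ce4809db5f3ca305f60ff80dd40c5a3.
PATCHED = {"2.7": "2.7.2", "2.8": "2.8.1", "2.9": "2.9.1"}

v = version.parse(tf.__version__)
fixed = version.parse(PATCHED.get(f"{v.major}.{v.minor}", "2.10.0"))
status = "patched" if v >= fixed else "affected by CVE-2022-36003"
print(f"TensorFlow {tf.__version__}: {status}")
```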

AI-Powered Analysis

Last updated: 06/22/2025, 17:23:59 UTC

Technical Analysis

CVE-2022-36003 is a vulnerability in TensorFlow, an open-source machine learning platform widely used for developing and deploying machine learning models. The issue is classified as CWE-617, a reachable assertion. The flaw lies in the `RandomPoissonV2` operation: when the operation receives an input shape with very large dimensions together with large rate values, it trips a `CHECK` failure, an internal assertion intended to validate assumptions in the code. Because a failed `CHECK` aborts the process rather than raising a catchable error, the TensorFlow process terminates unexpectedly, resulting in a denial of service (DoS).

The vulnerability affects TensorFlow versions prior to 2.7.2, the 2.8.0 release (fixed in 2.8.1), and the 2.9.0 release (fixed in 2.9.1). It is patched in TensorFlow 2.10.0, and the fix has been backported to the supported releases 2.7.2, 2.8.1, and 2.9.1. There are currently no known workarounds, and no exploits have been reported in the wild. Triggering the flaw requires neither authentication nor user interaction: it is enough to supply crafted inputs to the vulnerable operation. The impact is primarily a denial of service in which the affected TensorFlow process crashes, potentially disrupting machine learning workflows or services that rely on TensorFlow for inference or training. The vulnerability is particularly relevant where TensorFlow is exposed to untrusted input or where availability is critical; because TensorFlow is often embedded in larger systems and cloud services, its reach can extend beyond standalone applications to complex ML pipelines and services.
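As an illustration of the exposed surface, the kernel can be invoked directly through `tf.raw_ops`. The snippet below is a minimal sketch, not the published reproducer: the shape and rate values are arbitrary assumptions, and on an unpatched build a sufficiently large combination of them can reach the failing `CHECK` and abort the whole Python process.

```python
import tensorflow as tf

# Illustrative sketch only (values are assumptions, not the advisory's PoC).
# The output of RandomPoissonV2 has shape `shape + rate.shape`, so large
# dimensions multiply quickly. On unpatched builds (< 2.7.2, 2.8.0, 2.9.0)
# oversized inputs can hit an internal CHECK and kill the process; patched
# builds return a regular error instead.
samples = tf.raw_ops.RandomPoissonV2(
    shape=tf.constant([97, 2, 3, 10, 5, 3, 3, 8], dtype=tf.int32),
    rate=tf.constant([[4.0, 2.0, 2.0]], dtype=tf.float32),
)
print(samples.shape)  # shape + rate.shape on a patched build
```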

Potential Impact

For European organizations, the impact of CVE-2022-36003 can be significant depending on their reliance on TensorFlow for machine learning workloads. Organizations in sectors such as finance, healthcare, automotive, and telecommunications, which increasingly use AI/ML models for critical decision-making, risk operational disruptions if their TensorFlow instances crash unexpectedly. Denial of service in ML pipelines can delay data processing, model training, or inference, potentially affecting service availability and business continuity. Cloud service providers and managed ML platforms operating in Europe that offer TensorFlow-based services may also face service outages or degraded performance. Additionally, organizations that expose TensorFlow-based APIs or services to external users or partners might be vulnerable to DoS attacks if input validation is insufficient. While the vulnerability does not lead to data breaches or code execution, the loss of availability can have cascading effects, especially in real-time or safety-critical applications. The lack of known exploits reduces immediate risk, but the absence of workarounds means that unpatched systems remain vulnerable. Given the widespread adoption of TensorFlow in European research institutions, enterprises, and public sector organizations, the potential for disruption is non-trivial.

Mitigation Recommendations

To mitigate CVE-2022-36003, European organizations should prioritize upgrading TensorFlow installations to version 2.10.0 or later, or apply the backported patches available in versions 2.7.2, 2.8.1, and 2.9.1. Since no workarounds exist, patching is the most effective measure. Organizations should audit their environments to identify all TensorFlow deployments, including embedded instances in third-party applications or cloud services. For environments where immediate patching is not feasible, implementing input validation and sanitization on data fed into the `RandomPoissonV2` operation can reduce the risk of triggering the assertion failure. Specifically, limiting the size of input shapes and the range of rate parameters to safe thresholds can help prevent the DoS condition. Monitoring TensorFlow logs and application health metrics for unexpected crashes or assertion failures can provide early warning signs of exploitation attempts. For cloud or containerized deployments, leveraging orchestration tools to automatically restart failed TensorFlow services can reduce downtime. Finally, organizations should engage with their ML platform vendors or cloud providers to confirm that patched TensorFlow versions are in use and to understand any additional mitigations implemented at the service level.
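Where immediate patching is not possible, the shape and rate limiting described above can be implemented as a thin guard in front of the public API. The sketch below is a hypothetical helper, not part of TensorFlow (`safe_random_poisson`, `MAX_OUTPUT_ELEMENTS`, and `MAX_RATE_VALUE` are assumed names), and the thresholds should be tuned to the workload.

```python
import math
import tensorflow as tf

# Hypothetical limits; tune to the workload. These are assumptions, not
# values taken from the TensorFlow advisory.
MAX_OUTPUT_ELEMENTS = 10**8   # cap on total number of sampled values
MAX_RATE_VALUE = 1e6          # cap on individual Poisson rates

def safe_random_poisson(shape, rate, dtype=tf.int64, seed=None):
    """Validate shape and rate before delegating to tf.random.poisson.

    Rejects requests whose output would be excessively large or whose rates
    are out of range, instead of letting an unpatched RandomPoissonV2 kernel
    reach an internal CHECK and terminate the process (CWE-617).
    """
    shape = [int(d) for d in shape]
    rate = tf.convert_to_tensor(rate, dtype=tf.float32)

    if any(d < 0 for d in shape):
        raise ValueError(f"negative dimension in shape: {shape}")
    if int(tf.size(rate)) == 0:
        raise ValueError("rate must not be empty")

    total_elements = math.prod(shape) * int(tf.size(rate))
    if total_elements > MAX_OUTPUT_ELEMENTS:
        raise ValueError(
            f"requested {total_elements} samples, limit is {MAX_OUTPUT_ELEMENTS}")

    if float(tf.reduce_min(rate)) < 0.0:
        raise ValueError("negative rate values are not allowed")
    if float(tf.reduce_max(rate)) > MAX_RATE_VALUE:
        raise ValueError(f"rate exceeds limit of {MAX_RATE_VALUE}")

    # tf.random.poisson is the public wrapper over the RandomPoissonV2 kernel.
    return tf.random.poisson(shape=shape, lam=rate, dtype=dtype, seed=seed)

# Usage: substitute for direct tf.random.poisson / tf.raw_ops.RandomPoissonV2
# calls on any code path that handles untrusted shape or rate values.
samples = safe_random_poisson(shape=[32, 10], rate=[1.5, 4.0, 9.0])
```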


Technical Details

Data Version
5.1
Assigner Short Name
GitHub_M
Date Reserved
2022-07-15T00:00:00.000Z
Cisa Enriched
true

Threat ID: 682d9845c4522896dcbf433f

Added to database: 5/21/2025, 9:09:25 AM

Last enriched: 6/22/2025, 5:23:59 PM

Last updated: 7/21/2025, 8:43:26 AM
