CVE-2022-23564: CWE-617: Reachable Assertion in TensorFlow

Medium
Published: Fri Feb 04 2022 (02/04/2022, 22:32:41 UTC)
Source: CVE
Vendor/Project: tensorflow
Product: tensorflow

Description

TensorFlow is an open source machine learning framework. When decoding a resource handle tensor from protobuf, a TensorFlow process can encounter cases where a `CHECK` assertion is invalidated based on user-controlled arguments. This allows attackers to cause denial of service in TensorFlow processes. The fix will be included in TensorFlow 2.8.0. We will also cherry-pick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.
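The short sketch below is one illustrative way to check whether an installed TensorFlow build already includes the fix. The `is_patched()` helper and the patched-version list encoded in it are assembled from the advisory text above; they are not part of any TensorFlow API.

```python
# Illustrative sketch: check whether the installed TensorFlow already contains
# the fix for CVE-2022-23564. The patched-version list is taken from the
# advisory text above; is_patched() is a hypothetical helper, not a TF API.
import tensorflow as tf
from packaging import version

# Backported fixes per release line, plus the first fully fixed release.
PATCHED = ["2.5.3", "2.6.3", "2.7.1", "2.8.0"]

def is_patched(installed: str) -> bool:
    v = version.parse(installed)
    for p in (version.parse(s) for s in PATCHED):
        if (v.major, v.minor) == (p.major, p.minor):
            return v >= p               # same release line: compare to the backport
    return v >= version.parse("2.8.0")  # other lines: fixed from 2.8.0 onward

print(f"TensorFlow {tf.__version__} patched: {is_patched(tf.__version__)}")
```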

AI-Powered Analysis

Last updated: 06/23/2025, 16:48:32 UTC

Technical Analysis

CVE-2022-23564 is a medium-severity vulnerability in TensorFlow, an open-source machine learning framework widely used for developing and deploying machine learning models. The vulnerability is a reachable assertion failure (CWE-617) in the decoding of a resource handle tensor from a protobuf message: when TensorFlow decodes a resource handle tensor built from user-controlled arguments, an internal CHECK assertion can be invalidated, aborting the TensorFlow process and causing a denial of service (DoS). The issue affects TensorFlow versions >= 2.7.0 and < 2.7.1, versions >= 2.6.0 and < 2.6.3, and all versions below 2.5.3. Exploitation does not require authentication or complex user interaction beyond supplying crafted protobuf data to the TensorFlow process, and no exploits have been reported in the wild. The fix is included in TensorFlow 2.8.0 and backported to the supported 2.7.1, 2.6.3, and 2.5.3 releases.

The root cause is an unchecked assumption in the protobuf decoding logic: malicious input violates that assumption, the CHECK fails, and the process terminates abruptly instead of returning an error. The vulnerability primarily impacts environments where TensorFlow processes untrusted or user-supplied protobuf data, such as cloud-based ML serving platforms, shared ML infrastructure, or automated pipelines ingesting external data. Since TensorFlow is widely used in research, industry, and cloud services, this vulnerability could disrupt ML workloads and services relying on affected versions.
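To make the attack surface concrete, the sketch below shows the general shape of the vulnerable input path: an attacker-controlled serialized `TensorProto` that claims the `DT_RESOURCE` dtype but carries inconsistent contents is handed to a TensorFlow API that rebuilds the tensor in native code. This is an illustrative assumption, not the published proof of concept; whether this exact call trips the `CHECK` depends on the version and build.

```python
# Illustrative sketch only; not the published proof of concept. It shows the
# kind of untrusted input path involved: a serialized TensorProto reaching
# TensorFlow's native tensor-from-proto decoding, where the CHECK lives.
import tensorflow as tf
from tensorflow.core.framework import tensor_pb2, types_pb2

# Attacker-controlled proto: declares a resource-handle tensor with one
# element but supplies no resource_handle_val entries (inconsistent content).
proto = tensor_pb2.TensorProto()
proto.dtype = types_pb2.DT_RESOURCE
proto.tensor_shape.dim.add().size = 1
serialized = proto.SerializeToString()

# On an unpatched build, decoding inconsistent input like this can trip an
# internal CHECK and abort the whole process instead of raising a Python error.
tensor = tf.io.parse_tensor(serialized, out_type=tf.resource)
```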

Potential Impact

For European organizations, the primary impact of CVE-2022-23564 is denial of service against TensorFlow-based services or applications. Organizations using affected TensorFlow versions in production environments—especially those exposing ML model serving endpoints or pipelines that accept external protobuf inputs—may experience service outages or interruptions. This can degrade availability of critical AI/ML-driven applications such as predictive analytics, automated decision-making, or customer-facing AI services. While the vulnerability does not directly lead to data confidentiality or integrity breaches, the disruption of ML services could impact business operations, cause financial losses, and reduce trust in AI systems. Organizations in sectors with heavy AI adoption, such as finance, telecommunications, automotive, and healthcare, may be particularly affected. Additionally, denial of service in shared or multi-tenant ML infrastructure could have cascading effects on multiple users or departments. Given the increasing reliance on AI/ML in European digital transformation initiatives, mitigating this vulnerability is important to maintain service continuity and operational resilience.

Mitigation Recommendations

1. Upgrade TensorFlow to version 2.8.0 or later, or apply the backported patches available in versions 2.7.1, 2.6.3, or 2.5.3 as appropriate.
2. Implement strict input validation and sanitization on all protobuf data received by TensorFlow processes, especially when inputs originate from untrusted or external sources (see the sketch after this list).
3. Where feasible, isolate TensorFlow workloads that process external inputs in sandboxed or containerized environments to limit the impact of crashes.
4. Employ monitoring and alerting on TensorFlow process crashes or abnormal terminations to detect exploitation attempts early.
5. For organizations using managed ML services, verify with providers that TensorFlow versions are patched or that mitigations are in place.
6. Review and harden ML model serving architectures to minimize exposure of protobuf decoding endpoints to untrusted users.
7. Conduct regular security assessments of ML pipelines and update dependency management practices to promptly incorporate security patches for TensorFlow and related libraries.

These steps go beyond generic advice by emphasizing input validation, process isolation, and operational monitoring tailored to the nature of this protobuf decoding assertion vulnerability.
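As one concrete way to apply recommendation 2, the hedged sketch below pre-screens a serialized `TensorProto` in Python before it reaches TensorFlow's native decoder, rejecting resource-handle dtypes and implausible element counts. The dtype allowlist, size cap, and `reject_suspicious_tensor` helper are illustrative assumptions, not part of any TensorFlow API.

```python
# Hedged sketch of recommendation 2: pre-screen untrusted serialized tensors
# before handing them to TensorFlow's native decoder. The allowlist, size cap,
# and reject_suspicious_tensor helper are illustrative, not a TensorFlow API.
from tensorflow.core.framework import tensor_pb2, types_pb2

ALLOWED_DTYPES = {types_pb2.DT_FLOAT, types_pb2.DT_INT32, types_pb2.DT_INT64}

def reject_suspicious_tensor(serialized: bytes) -> tensor_pb2.TensorProto:
    proto = tensor_pb2.TensorProto()
    proto.ParseFromString(serialized)  # raises DecodeError on malformed bytes

    # Never let resource handles (or other unexpected dtypes) through.
    if proto.dtype not in ALLOWED_DTYPES:
        raise ValueError(f"disallowed dtype: {proto.dtype}")

    # Reject shapes with unknown dimensions or implausible element counts.
    num_elements = 1
    for dim in proto.tensor_shape.dim:
        if dim.size < 0:
            raise ValueError("unknown dimension in untrusted tensor")
        num_elements *= dim.size
    if num_elements > 10_000_000:  # arbitrary illustrative cap
        raise ValueError("untrusted tensor is too large")

    return proto
```

A screening step like this does not replace upgrading; it only narrows what unpatched decoders are exposed to while the upgrade is rolled out.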

Need more detailed analysis?Get Pro

Technical Details

Data Version: 5.1
Assigner Short Name: GitHub_M
Date Reserved: 2022-01-19T00:00:00.000Z
CISA Enriched: true

Threat ID: 682d9842c4522896dcbf250b

Added to database: 5/21/2025, 9:09:22 AM

Last enriched: 6/23/2025, 4:48:32 PM

Last updated: 8/2/2025, 10:51:10 PM

