
CVE-2022-23588: CWE-617: Reachable Assertion in tensorflow tensorflow

Medium
Published: Fri Feb 04 2022 (02/04/2022, 22:32:21 UTC)
Source: CVE
Vendor/Project: tensorflow
Product: tensorflow

Description

Tensorflow is an Open Source Machine Learning Framework. A malicious user can cause a denial of service by altering a `SavedModel` such that Grappler optimizer would attempt to build a tensor using a reference `dtype`. This would result in a crash due to a `CHECK`-fail in the `Tensor` constructor as reference types are not allowed. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.

AI-Powered Analysis

Last updated: 06/22/2025, 03:37:26 UTC

Technical Analysis

CVE-2022-23588 is a medium-severity vulnerability affecting multiple versions of TensorFlow, an open-source machine learning framework widely used for developing and deploying machine learning models. The vulnerability arises from a reachable assertion failure (CWE-617) within TensorFlow's Grappler optimizer component. Specifically, a malicious actor can craft a specially altered SavedModel file such that when TensorFlow attempts to optimize the model graph, Grappler tries to build a tensor using a reference data type (dtype). Reference types are not permitted in this context, causing a CHECK-fail assertion in the Tensor constructor, which aborts the TensorFlow process and effectively results in a denial-of-service (DoS) condition.

The affected versions are TensorFlow releases prior to 2.5.3, 2.6.0 up to but not including 2.6.3, and 2.7.0 up to but not including 2.7.1. The issue was addressed in TensorFlow 2.8.0, with the fix backported to 2.5.3, 2.6.3, and 2.7.1. No known exploits have been reported in the wild to date. Exploitation requires the attacker to supply a malicious SavedModel file to the TensorFlow environment, so some level of interaction or input of crafted data is needed. The impact is primarily a denial of service due to a process crash rather than remote code execution or data compromise: the vulnerability affects the availability of the TensorFlow service but does not directly expose confidentiality or integrity risks. The root cause is missing validation that would prevent the use of invalid reference types during tensor construction in the optimization phase.
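For illustration, the following Python sketch shows where the failure mode surfaces on an unpatched installation. The model path, the "serving_default" signature name, and the input name are assumptions for the example; the key point is that a CHECK failure aborts the whole process during graph optimization rather than raising a catchable Python exception.

```python
# A minimal sketch (not an exploit) of the failure mode on unpatched TensorFlow
# versions (prior to 2.5.3, 2.6.0-2.6.2, 2.7.0). The model path, signature name,
# and input name are illustrative assumptions.
import tensorflow as tf

# Loading the untrusted model may succeed; the dangerous step is graph optimization.
loaded = tf.saved_model.load("/path/to/untrusted_saved_model")
infer = loaded.signatures["serving_default"]

# If the SavedModel was altered so that a node attribute carries a reference
# dtype (e.g. DT_FLOAT_REF), a Grappler optimization pass constructs a Tensor
# with that dtype and hits the CHECK-fail in the Tensor constructor, aborting
# the entire process instead of raising an exception the caller could handle.
result = infer(x=tf.constant([[1.0]], dtype=tf.float32))
```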

Potential Impact

For European organizations leveraging TensorFlow for machine learning workloads—especially those deploying models in production environments or providing ML-as-a-service—this vulnerability could lead to service interruptions. A malicious user or attacker able to submit or influence the input SavedModel files could trigger crashes, causing downtime or degraded service availability. This could impact sectors relying on AI/ML for critical functions, such as finance, healthcare, manufacturing, and telecommunications. While the vulnerability does not allow for data exfiltration or privilege escalation, denial of service in AI pipelines could disrupt automated decision-making, analytics, or customer-facing applications. Organizations with automated model retraining or deployment pipelines that accept external model inputs are particularly at risk. The impact is mitigated if model inputs are tightly controlled and validated. However, in collaborative or multi-tenant environments where models are shared or uploaded by multiple users, the risk increases. Given the growing adoption of TensorFlow across European research institutions, enterprises, and cloud providers, the potential for operational disruption is notable but not catastrophic.

Mitigation Recommendations

1. Upgrade TensorFlow to version 2.8.0 or later, or move to one of the patched backport releases 2.5.3, 2.6.3, or 2.7.1, to remediate the vulnerability.
2. Implement strict validation and sanitization of all SavedModel files before loading or optimization, rejecting any models that contain unexpected or unsupported data types, especially reference types (see the sketch after this list).
3. Restrict the ability to upload or submit SavedModel files to trusted users or systems only, minimizing exposure to malicious inputs.
4. Employ runtime monitoring and alerting for unexpected TensorFlow crashes or restarts, enabling rapid detection and response to potential exploitation attempts.
5. Where feasible, isolate TensorFlow model-serving environments to limit the blast radius of a denial-of-service event, using containerization or dedicated compute resources.
6. Review and harden the ML pipeline to ensure that untrusted inputs cannot directly influence model loading or optimization processes.
7. Engage with the TensorFlow community and vendor support channels to stay informed about any emerging exploits or additional patches related to this vulnerability.
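As a concrete starting point for recommendation 2, the sketch below parses a SavedModel protobuf before it is ever handed to TensorFlow and rejects graphs whose node attributes declare reference dtypes (enum names ending in "_REF"). The file path and the helper name `has_reference_dtypes` are illustrative assumptions, and the scan is deliberately shallow (it does not descend into the function library), so treat it as a template rather than a complete validator.

```python
# Pre-load scanner sketch: reject SavedModels whose graph nodes declare
# reference dtypes, the condition that triggers the CHECK-fail described above.
import sys

from tensorflow.core.framework import types_pb2
from tensorflow.core.protobuf import saved_model_pb2


def has_reference_dtypes(saved_model_path: str) -> bool:
    """Return True if any node attribute in the SavedModel uses a reference dtype."""
    sm = saved_model_pb2.SavedModel()
    with open(saved_model_path, "rb") as f:
        sm.ParseFromString(f.read())

    for meta_graph in sm.meta_graphs:
        for node in meta_graph.graph_def.node:
            for attr in node.attr.values():
                # An AttrValue carries either a single dtype or a list of dtypes.
                dtypes = [attr.type] if attr.type else list(attr.list.type)
                for dt in dtypes:
                    # Reference variants are named DT_*_REF in the DataType enum.
                    if types_pb2.DataType.Name(dt).endswith("_REF"):
                        return True
    return False


if __name__ == "__main__":
    path = sys.argv[1]  # e.g. the saved_model.pb inside an uploaded model directory
    if has_reference_dtypes(path):
        print("Rejecting model: reference dtype found in graph")
        sys.exit(1)
    print("No reference dtypes detected")
```

Running a check like this in the ingestion pipeline, before any tf.saved_model.load call, keeps the rejection decision outside the process that would otherwise be crashed.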


Technical Details

Data Version
5.1
Assigner Short Name
GitHub_M
Date Reserved
2022-01-19T00:00:00.000Z
Cisa Enriched
true

Threat ID: 682d9848c4522896dcbf61ec

Added to database: 5/21/2025, 9:09:28 AM

Last enriched: 6/22/2025, 3:37:26 AM

Last updated: 8/18/2025, 11:28:55 PM

