
CVE-2022-23565: CWE-617: Reachable Assertion in tensorflow tensorflow

Medium
Published: Fri Feb 04 2022 (02/04/2022, 22:32:40 UTC)
Source: CVE
Vendor/Project: tensorflow
Product: tensorflow

Description

Tensorflow is an Open Source Machine Learning Framework. An attacker can trigger denial of service via assertion failure by altering a `SavedModel` on disk such that `AttrDef`s of some operation are duplicated. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.

AI-Powered Analysis

Last updated: 06/23/2025, 16:48:18 UTC

Technical Analysis

CVE-2022-23565 is a medium-severity vulnerability affecting multiple versions of TensorFlow, an open-source machine learning framework widely used for developing and deploying machine learning models. The vulnerability arises from a reachable assertion failure (CWE-617) triggered when an attacker manipulates a SavedModel file on disk. Specifically, by duplicating the AttrDef entries of certain operations within the SavedModel, an attacker can cause TensorFlow to hit an assertion failure during model loading or execution. This results in a denial of service (DoS) condition, causing the TensorFlow process to crash or terminate unexpectedly.

The affected versions include TensorFlow 2.5.0 up to but not including 2.5.3, 2.6.0 up to but not including 2.6.3, and 2.7.0 up to but not including 2.7.1. The vulnerability does not require authentication or user interaction beyond supplying a malicious SavedModel file, which could be loaded by an application using TensorFlow. No known exploits have been reported in the wild to date. The issue is addressed in TensorFlow 2.8.0 and backported patches for 2.7.1, 2.6.3, and 2.5.3.

The root cause is improper validation of the SavedModel's internal structure, allowing duplicated attribute definitions to trigger an assertion failure, which is a defensive programming check that unexpectedly terminates the process. This vulnerability primarily impacts availability by causing denial of service but does not directly affect confidentiality or integrity of data or models.
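The failing check can be approximated as a pre-load validation step: before handing a SavedModel to TensorFlow, scan each operation signature in the serialized graph for duplicated attribute names and reject the file instead of letting the loader crash. The helper below is a hypothetical sketch of that core check; the function names are assumptions for illustration, not TensorFlow API, and in practice the attribute names would be read from the parsed SavedModel protobuf.

```python
def find_duplicate_attrs(attr_names):
    """Return the set of attribute names that appear more than once.

    A SavedModel whose AttrDef list for an operation contains duplicates is
    exactly the malformed input that triggers the CVE-2022-23565 assertion,
    so a loader can refuse such a file up front rather than aborting.
    """
    seen, duplicates = set(), set()
    for name in attr_names:
        if name in seen:
            duplicates.add(name)
        seen.add(name)
    return duplicates


def is_safe_attr_list(attr_names):
    """True if no attribute name is duplicated (safe to pass to the loader)."""
    return not find_duplicate_attrs(attr_names)
```

In a real deployment the attribute names would come from parsing the SavedModel protobuf (the `attr` fields of each operation definition in the graph's function library) before calling the model loader.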

Potential Impact

For European organizations, the primary impact of CVE-2022-23565 is the potential disruption of machine learning services that rely on TensorFlow for model loading and inference. Organizations using vulnerable TensorFlow versions in production environments may experience unexpected crashes or service outages if an attacker supplies or injects a malicious SavedModel file. This could affect sectors such as finance, healthcare, manufacturing, and research institutions that deploy AI/ML models for critical decision-making or automation. Although the vulnerability does not lead to data leakage or model tampering, the denial of service could interrupt business operations, degrade service availability, and erode user trust. Additionally, organizations that share or receive machine learning models from external sources may be at risk if malicious models are introduced. Given the growing reliance on AI/ML in Europe, especially in strategic industries and public sector applications, ensuring TensorFlow environments are patched is essential to maintain operational resilience.

Mitigation Recommendations

1. Upgrade TensorFlow to version 2.8.0 or later, or apply the backported patches available for versions 2.7.1, 2.6.3, and 2.5.3 to remediate the vulnerability.
2. Implement strict validation and integrity checks on all SavedModel files before loading them into TensorFlow environments, including verifying source authenticity and using cryptographic signatures where possible.
3. Restrict the ability to upload or modify SavedModel files to trusted users and processes only, minimizing the risk of malicious model injection.
4. Monitor TensorFlow application logs and system behavior for unexpected assertion failures or crashes that may indicate exploitation attempts.
5. Employ sandboxing or containerization to isolate TensorFlow workloads, limiting the impact of potential crashes on broader systems.
6. Educate development and operations teams about the risks of loading untrusted models and enforce secure model management policies.
7. For organizations distributing models externally, provide guidance and tools to recipients to verify model integrity and encourage patching of TensorFlow dependencies.
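Recommendation 2 (integrity checks before loading) can be as simple as comparing a model file's SHA-256 digest against a trusted manifest distributed out of band. The sketch below is illustrative, with assumed function names; production systems should prefer real signatures (HMAC or public-key) over bare hashes, since a hash alone only detects corruption, not substitution by an attacker who can also alter the manifest.

```python
import hashlib
import hmac


def sha256_of_file(path, chunk_size=1 << 20):
    """Hash the file in chunks so large SavedModels need not fit in memory."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def verify_model(path, expected_hex):
    """Compare against the trusted digest using a timing-safe comparison."""
    return hmac.compare_digest(sha256_of_file(path), expected_hex)
```

A loading pipeline would call `verify_model` and refuse to pass the file to TensorFlow on mismatch, combining this with the version upgrade in recommendation 1 rather than relying on it alone.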


Technical Details

Data Version
5.1
Assigner Short Name
GitHub_M
Date Reserved
2022-01-19T00:00:00.000Z
Cisa Enriched
true

Threat ID: 682d9842c4522896dcbf2518

Added to database: 5/21/2025, 9:09:22 AM

Last enriched: 6/23/2025, 4:48:18 PM

Last updated: 2/7/2026, 11:15:34 AM

