CVE-2022-23583: CWE-617: Reachable Assertion in tensorflow tensorflow
Tensorflow is an Open Source Machine Learning Framework. A malicious user can cause a denial of service by altering a `SavedModel` such that any binary op would trigger `CHECK` failures. This occurs when the protobuf part corresponding to the tensor arguments is modified such that the `dtype` no longer matches the `dtype` expected by the op. In that case, calling the templated binary operator for the binary op would receive corrupted data, due to the type confusion involved. If `Tin` and `Tout` don't match the type of data in `out` and `input_*` tensors then `flat<*>` would interpret it wrongly. In most cases, this would be a silent failure, but we have noticed scenarios where this results in a `CHECK` crash, hence a denial of service. The fix will be included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range.
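The failure mode described above can be illustrated outside TensorFlow with a few lines of plain Python. This is an illustrative sketch, not TensorFlow code: it uses the standard `struct` module to show what happens when a byte buffer written as one dtype is reinterpreted as another, which is roughly what the `flat<*>` accessor does when the protobuf's declared `Tin`/`Tout` no longer match the real tensor data.

```python
import struct

# Illustrative sketch (plain Python, not TensorFlow internals): reinterpreting
# a buffer under the wrong dtype, as happens when a SavedModel's declared
# dtype no longer matches the bytes actually stored in the tensor.

# A "tensor" of four float32 values, serialized as raw bytes (16 bytes).
floats = [1.0, 2.0, 3.0, 4.0]
buf = struct.pack("<4f", *floats)

# Correct interpretation: declared dtype matches the real data.
as_float32 = struct.unpack("<4f", buf)
assert list(as_float32) == floats

# Type-confused interpretation: the metadata claims int32. The bytes still
# parse, but the values are the raw bit patterns of the floats -- this is
# the "silent failure" case (float32 1.0 becomes int32 1065353216).
as_int32 = struct.unpack("<4i", buf)

# Worse confusion: the metadata claims a wider dtype (int64). Now even the
# element count no longer matches the byte length; TensorFlow's internal
# consistency CHECKs can fire on this kind of mismatch and abort the process.
try:
    struct.unpack("<4q", buf)  # wants 32 bytes, only 16 are available
except struct.error as e:
    print("size mismatch detected:", e)
```

The first mismatch produces garbage values without any error, while the second trips a size check, mirroring the advisory's observation that most cases fail silently but some reach a `CHECK` crash.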
AI Analysis
Technical Summary
CVE-2022-23583 is a medium-severity vulnerability affecting TensorFlow, an open-source machine learning framework widely used for developing and deploying machine learning models. The vulnerability arises from a reachable assertion (CWE-617) caused by type confusion in the handling of tensor data types within TensorFlow's SavedModel format. Specifically, a malicious actor can craft or alter a SavedModel protobuf so that the data type (`dtype`) of the tensor arguments no longer matches the `dtype` expected by a binary operation. When TensorFlow executes that operation, the templated binary operator receives data of the wrong type, and the `flat<*>` accessor misinterprets the tensor contents. This can trip a `CHECK` assertion in the TensorFlow runtime, crashing the process and resulting in a denial of service (DoS). The vulnerability affects TensorFlow versions prior to 2.5.3, versions 2.6.0 up to but not including 2.6.3, and versions 2.7.0 up to but not including 2.7.1. The issue is fixed in TensorFlow 2.8.0 and backported to 2.7.1, 2.6.3, and 2.5.3. No known exploits have been reported in the wild. Exploitation requires a maliciously crafted SavedModel to be loaded and executed, which implies some level of user interaction or model ingestion by the target system. The impact is primarily availability: confidentiality and integrity of data are not directly affected, but TensorFlow-based services can be crashed by the failed internal consistency checks.
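The affected-version ranges above can be encoded in a small helper for quickly triaging deployed installations. This is a sketch under our own naming (`is_affected` is not a TensorFlow or advisory API), using simple tuple comparison on dotted version strings:

```python
# Hypothetical helper (not part of TensorFlow): decide whether an installed
# TensorFlow version falls in the ranges affected by CVE-2022-23583.

def parse_version(v: str) -> tuple:
    """Turn '2.6.2' into (2, 6, 2) for ordered comparison."""
    return tuple(int(part) for part in v.split("."))

def is_affected(version: str) -> bool:
    v = parse_version(version)
    # Fixed in 2.5.3, 2.6.3, 2.7.1, and 2.8.0; everything below those
    # boundaries within each affected series is vulnerable.
    if v < (2, 5, 3):
        return True
    if (2, 6, 0) <= v < (2, 6, 3):
        return True
    if (2, 7, 0) <= v < (2, 7, 1):
        return True
    return False

print(is_affected("2.7.0"))  # True: affected, patched in 2.7.1
print(is_affected("2.8.0"))  # False: contains the fix
```

Note this simple parser assumes plain numeric versions; pre-release suffixes (e.g. `2.7.0rc1`) would need a real version library such as `packaging.version`.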
Potential Impact
For European organizations leveraging TensorFlow in production environments—such as AI research institutions, technology companies, financial services, healthcare providers, and industrial automation firms—this vulnerability poses a risk of service disruption. Since TensorFlow is commonly used for critical machine learning workloads, a denial of service could interrupt automated decision-making, data analysis, or AI-driven applications. This could lead to operational downtime, loss of productivity, and potential financial losses. Organizations that accept or deploy third-party machine learning models are particularly at risk if they do not validate model integrity before loading. The vulnerability does not allow remote code execution or data leakage, so the primary impact is availability degradation. However, in environments where TensorFlow is integrated into larger systems, repeated crashes could cascade into broader system instability. Given the increasing adoption of AI and ML in European industries, the threat could affect sectors reliant on continuous AI service availability. The lack of known exploits reduces immediate risk, but the vulnerability should be addressed proactively to prevent potential exploitation.
Mitigation Recommendations
1. Upgrade TensorFlow to version 2.8.0 or later, or apply the backported patches available in versions 2.7.1, 2.6.3, and 2.5.3.
2. Implement strict validation and integrity checks on all SavedModel files before loading them into TensorFlow environments. This includes verifying the consistency of tensor data types and rejecting models with suspicious or malformed protobuf data.
3. Restrict the sources from which models can be loaded to trusted repositories or internal development pipelines to prevent ingestion of maliciously crafted models.
4. Employ runtime monitoring and alerting for TensorFlow process crashes or assertion failures to detect potential exploitation attempts early.
5. In environments where TensorFlow is exposed to external inputs, consider sandboxing or isolating TensorFlow execution to limit the impact of a crash on the broader system.
6. Educate data scientists and ML engineers about the risks of loading untrusted models and enforce secure model management practices.
7. Review and update incident response plans to include scenarios involving AI/ML service disruptions due to vulnerabilities like this one.
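One way to put recommendations 2 and 3 into practice is to verify a SavedModel directory against a pre-recorded digest allowlist before handing it to TensorFlow. The following is a minimal sketch using only the Python standard library; the allowlist format and the function names are our assumptions, not an established TensorFlow API:

```python
import hashlib
from pathlib import Path

# Minimal sketch: refuse to load a SavedModel directory unless every listed
# file matches a pre-recorded SHA-256 digest. The allowlist layout and
# function names are illustrative assumptions, not a TensorFlow API.

def digest_file(path: Path) -> str:
    """SHA-256 of a file, read in 1 MiB chunks to handle large weights."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_saved_model(model_dir: str, allowlist: dict) -> bool:
    """allowlist maps relative file paths to expected SHA-256 hex digests."""
    root = Path(model_dir)
    for rel, expected in allowlist.items():
        path = root / rel
        if not path.is_file() or digest_file(path) != expected:
            return False  # missing or tampered file (e.g. saved_model.pb)
    return True

# Usage: only hand the model to TensorFlow after verification passes.
# if verify_saved_model("models/prod", trusted_hashes):
#     model = tf.saved_model.load("models/prod")
```

Because this CVE is triggered by altering the serialized protobuf, any tampering with `saved_model.pb` changes its digest and fails the check, provided the allowlist itself is distributed through a trusted channel.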
Affected Countries
Germany, France, United Kingdom, Netherlands, Sweden, Finland, Italy, Spain, Poland, Belgium
Technical Details
- Data Version: 5.1
- Assigner Short Name: GitHub_M
- Date Reserved: 2022-01-19T00:00:00.000Z
- CISA Enriched: true
Threat ID: 682d9848c4522896dcbf61c3
Added to database: 5/21/2025, 9:09:28 AM
Last enriched: 6/22/2025, 3:50:12 AM
Last updated: 8/13/2025, 10:13:32 PM
Related Threats
- CVE-2025-53948: CWE-415 Double Free in Santesoft Sante PACS Server (High)
- CVE-2025-52584: CWE-122 Heap-based Buffer Overflow in Ashlar-Vellum Cobalt (High)
- CVE-2025-46269: CWE-122 Heap-based Buffer Overflow in Ashlar-Vellum Cobalt (High)
- CVE-2025-54862: CWE-79 Improper Neutralization of Input During Web Page Generation (XSS or 'Cross-site Scripting') in Santesoft Sante PACS Server (Medium)
- CVE-2025-54759: CWE-79 Improper Neutralization of Input During Web Page Generation (XSS or 'Cross-site Scripting') in Santesoft Sante PACS Server (Medium)