CVE-2022-35985: CWE-617: Reachable Assertion in TensorFlow
TensorFlow is an open source platform for machine learning. If `LRNGrad` is given an `output_image` input tensor that is not 4-D, it results in a `CHECK` fail that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit bd90b3efab4ec958b228cd7cfe9125be1c0cf255. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported range. There are no known workarounds for this issue.
AI Analysis
Technical Summary
CVE-2022-35985 is a vulnerability identified in TensorFlow, an open-source machine learning platform widely used for developing and deploying machine learning models. The issue arises specifically in the LRNGrad operation, which is part of TensorFlow's Local Response Normalization gradient computation. When the `output_image` input tensor provided to LRNGrad is not four-dimensional (4-D), the system triggers a CHECK failure, an assertion designed to validate internal assumptions in the code. This assertion failure leads to a denial of service (DoS) condition by crashing the TensorFlow process. The root cause is classified under CWE-617 (Reachable Assertion), indicating that an assertion can be triggered by external input, causing the program to terminate unexpectedly.

The vulnerability affects multiple TensorFlow versions: all versions prior to 2.7.2, versions from 2.8.0 up to but not including 2.8.1, and versions from 2.9.0 up to but not including 2.9.1. The issue was patched in commit bd90b3efab4ec958b228cd7cfe9125be1c0cf255 and incorporated into TensorFlow 2.10.0, with backports planned for 2.9.1, 2.8.1, and 2.7.2. There are currently no known workarounds for this vulnerability, and no exploits have been observed in the wild.

Exploitation requires that an attacker can supply a malformed input tensor to the LRNGrad operation, which may be feasible in environments where TensorFlow processes untrusted or user-supplied data. The impact is limited to denial of service: the assertion failure causes the process to terminate, but does not allow for code execution or data leakage. The vulnerability does not require authentication or user interaction beyond supplying the malformed tensor input. Given the nature of TensorFlow deployments, this vulnerability primarily affects machine learning pipelines, model training, and inference services that use vulnerable TensorFlow versions and accept external input tensors.
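The failure mode can be sketched in plain Python. The helpers below are illustrative only (they are not TensorFlow's actual kernel code): the first mimics a hard CHECK-style assertion that aborts the process when `output_image` is not 4-D, while the second mimics the patched behavior of rejecting bad input with a recoverable error.

```python
def lrn_grad_unpatched(output_image_shape):
    # CHECK-style assertion: a non-4-D input terminates execution
    # abruptly (illustrative of CWE-617, not TensorFlow's real code).
    assert len(output_image_shape) == 4, "CHECK failed: output_image must be 4-D"
    return "gradient computed"


def lrn_grad_patched(output_image_shape):
    # Patched behavior: validate the rank up front and surface a
    # recoverable error instead of crashing the process.
    if len(output_image_shape) != 4:
        raise ValueError(
            f"output_image must be 4-D, got rank {len(output_image_shape)}"
        )
    return "gradient computed"


# A malformed (non-4-D) shape crashes the unpatched path, but is
# caught cleanly by the caller in the patched one.
try:
    lrn_grad_patched((8, 8))
except ValueError as exc:
    print("rejected:", exc)
```

The practical difference is where the failure lands: an assertion kills the whole serving process (the DoS described above), whereas a raised error can be handled by the caller and turned into a rejected request.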
Potential Impact
For European organizations, the primary impact of CVE-2022-35985 is the potential disruption of machine learning services and workflows. Organizations relying on TensorFlow for critical AI applications—such as financial institutions using ML for fraud detection, healthcare providers employing AI for diagnostics, or manufacturing firms leveraging predictive maintenance—may experience service outages if an attacker supplies malformed input tensors to vulnerable TensorFlow instances. This denial of service could lead to downtime, delayed processing, and loss of productivity. While the vulnerability does not directly compromise data confidentiality or integrity, the availability impact could indirectly affect business operations and service reliability. Additionally, organizations providing AI-as-a-Service or cloud-based ML platforms in Europe could face reputational damage and customer trust issues if their services are disrupted. Since no known exploits exist in the wild, the immediate risk is moderate; however, the ease of triggering the assertion failure by supplying malformed input means that attackers with access to input channels could exploit this vulnerability. The lack of authentication requirements for triggering the issue increases the risk in environments where TensorFlow processes untrusted data. Overall, the impact is primarily operational, affecting availability rather than data security.
Mitigation Recommendations
To mitigate CVE-2022-35985, European organizations should prioritize upgrading TensorFlow installations to version 2.10.0 or later, or apply the backported patches for versions 2.7.2, 2.8.1, and 2.9.1 as soon as they become available. Since no workarounds exist, patching is the most effective mitigation. Organizations should audit their ML pipelines to identify all TensorFlow instances, including those embedded in containerized environments, cloud services, and edge devices. Restricting access to TensorFlow services that accept input tensors from untrusted sources is critical; implementing strict input validation and sanitization at the application layer can reduce the risk of malformed tensors reaching the vulnerable LRNGrad operation. Employing network segmentation and access controls to limit exposure of ML services to internal trusted networks can further reduce attack surface. Monitoring TensorFlow logs and application behavior for unexpected crashes or assertion failures can help detect attempted exploitation. For organizations using managed ML platforms, verifying that service providers have applied the necessary patches is essential. Finally, incorporating fuzz testing and input validation in ML model deployment pipelines can proactively identify similar vulnerabilities in the future.
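When auditing pipelines for vulnerable installations, the fixed-version boundaries from the advisory (2.10.0 and the 2.7.2 / 2.8.1 / 2.9.1 backports) can be encoded in a small helper. This is a minimal sketch that assumes plain `X.Y.Z` version strings; real audits should handle pre-release suffixes or use a proper version-parsing library.

```python
def is_patched(version: str) -> bool:
    """Return True if a TensorFlow version string (assumed 'X.Y.Z')
    includes the fix for CVE-2022-35985."""
    parts = tuple(int(p) for p in version.split(".")[:3])
    # 2.10.0 and later carry the fix on the main line.
    if parts >= (2, 10, 0):
        return True
    # Backported fixes on the supported release branches.
    backport_floor = {(2, 7): (2, 7, 2), (2, 8): (2, 8, 1), (2, 9): (2, 9, 1)}
    floor = backport_floor.get(parts[:2])
    return floor is not None and parts >= floor


# Example: flag installations found during an inventory audit.
for v in ["2.6.0", "2.7.2", "2.8.0", "2.9.1", "2.10.0"]:
    print(v, "patched" if is_patched(v) else "VULNERABLE")
```

A check like this can run inside CI or configuration-management tooling so that container images and edge deployments pinned to vulnerable releases are surfaced automatically.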
Affected Countries
Germany, France, United Kingdom, Netherlands, Sweden, Finland, Ireland, Belgium, Italy, Spain
Technical Details
- Data Version: 5.1
- Assigner Short Name: GitHub_M
- Date Reserved: 2022-07-15T00:00:00.000Z
- CISA Enriched: true
Threat ID: 682d9845c4522896dcbf40ea
Added to database: 5/21/2025, 9:09:25 AM
Last enriched: 6/22/2025, 7:50:15 PM
Last updated: 8/18/2025, 11:32:35 PM
Related Threats
CVE-2025-8364: Address bar spoofing using a blob URI on Firefox for Android in Mozilla Firefox
CVE-2025-8042: Sandboxed iframe could start downloads in Mozilla Firefox
CVE-2025-8041: Incorrect URL truncation in Firefox for Android in Mozilla Firefox
CVE-2025-55033: Drag and drop gestures in Focus for iOS could allow JavaScript links to be executed incorrectly in Mozilla Focus for iOS
CVE-2025-55032: Focus incorrectly ignores Content-Disposition headers for some MIME types in Mozilla Focus for iOS