
CVE-2022-35973: CWE-20: Improper Input Validation in tensorflow tensorflow

Medium
Published: Fri Sep 16 2022 (09/16/2022, 21:00:14 UTC)
Source: CVE
Vendor/Project: tensorflow
Product: tensorflow

Description

TensorFlow is an open source platform for machine learning. If `QuantizedMatMul` is given nonscalar input for `min_a`, `max_a`, `min_b`, or `max_b`, it gives a segfault that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit aca766ac7693bf29ed0df55ad6bfcc78f35e7f48. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported range. There are no known workarounds for this issue.

AI-Powered Analysis

AI analysis last updated: 06/22/2025, 20:04:43 UTC

Technical Analysis

CVE-2022-35973 is a medium-severity vulnerability in TensorFlow, an open-source machine learning platform widely used for developing and deploying machine learning models. The vulnerability arises from improper input validation (CWE-20) in the QuantizedMatMul operation: if the parameters `min_a`, `max_a`, `min_b`, or `max_b` are given nonscalar inputs instead of the expected scalar values, the TensorFlow process suffers a segmentation fault (segfault). This crash can be exploited to trigger a denial of service (DoS), causing the affected application or service to terminate or become unresponsive.

The issue affects all TensorFlow versions prior to 2.7.2, versions 2.8.0 up to but not including 2.8.1, and versions 2.9.0 up to but not including 2.9.1. It has been patched in TensorFlow 2.10.0 and backported to 2.9.1, 2.8.1, and 2.7.2. No exploits have been reported in the wild, and no workarounds exist other than applying the official patches.

The root cause is missing validation of the shape of these range parameters in the QuantizedMatMul kernel, which performs the quantized matrix multiplication commonly used in optimized inference pipelines. Exploitation requires an attacker to supply crafted inputs to the affected TensorFlow API or service, which may be feasible where TensorFlow models are exposed via APIs or embedded in applications that accept external input. No authentication or user interaction is required if the attacker can control the input reaching the vulnerable function.
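
The snippet below is a minimal sketch of this failure mode, assuming an eager-mode call to `tf.raw_ops.QuantizedMatMul`; it is illustrative rather than the reproducer published in the advisory, and the specific tensor values are assumptions. On an unpatched build, the nonscalar `min_a` crashes the Python process; a patched build rejects the input with `InvalidArgumentError` instead.

```python
# Illustrative sketch only (not the advisory's reproducer; input values are assumed).
# Vulnerable builds (< 2.7.2 / 2.8.1 / 2.9.1) crash on the nonscalar min_a;
# patched builds raise InvalidArgumentError, which the except block catches.
import tensorflow as tf

# Quantize two small float matrices to quint8 so they are valid operands for the op.
a = tf.quantization.quantize(tf.constant([[1.0, 2.0], [3.0, 4.0]]), 0.0, 10.0, tf.quint8)
b = tf.quantization.quantize(tf.constant([[5.0, 6.0], [7.0, 8.0]]), 0.0, 10.0, tf.quint8)

try:
    tf.raw_ops.QuantizedMatMul(
        a=a.output,
        b=b.output,
        min_a=[],      # empty, nonscalar tensor where a scalar is expected
        max_a=10.0,
        min_b=0.0,
        max_b=10.0,
    )
except tf.errors.InvalidArgumentError as e:
    # Patched versions validate the parameter shapes and land here instead of crashing.
    print("rejected by patched TensorFlow:", e)
```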

Potential Impact

For European organizations, the primary impact of this vulnerability is the risk of denial of service attacks against machine learning services or applications that utilize vulnerable TensorFlow versions. This can lead to service downtime, disruption of critical AI-driven processes, and potential loss of availability in systems relying on TensorFlow for inference or training. Industries heavily dependent on AI, such as finance, healthcare, automotive, and manufacturing, could experience operational interruptions. While the vulnerability does not directly compromise confidentiality or integrity, the resulting service outages could indirectly affect business continuity and reliability. Additionally, organizations providing AI-as-a-service or cloud-based machine learning platforms may face reputational damage and customer trust issues if their services are disrupted. Since no known exploits exist in the wild, the immediate risk is moderate, but the widespread use of TensorFlow in European research institutions, enterprises, and technology providers means that unpatched systems remain vulnerable to targeted DoS attacks. The absence of workarounds increases the urgency for patching to maintain service availability.

Mitigation Recommendations

European organizations should prioritize updating TensorFlow installations to the patched versions: 2.7.2, 2.8.1, 2.9.1, or 2.10.0 and later. Given the lack of workarounds, patch management is the most effective mitigation. Organizations should:

1) Inventory all systems and applications using TensorFlow to identify affected versions.
2) Test and deploy the patched TensorFlow versions in development and production environments promptly.
3) Implement input validation at the application layer to ensure that parameters passed to TensorFlow APIs conform to the expected scalar types, adding a defensive layer (see the sketch after this list).
4) Monitor logs and application behavior for unexpected crashes or segfaults that may indicate exploitation attempts.
5) Restrict access to machine learning model endpoints to trusted users and networks to reduce the attack surface.
6) Employ rate limiting and anomaly detection on APIs exposing TensorFlow functionality to detect and mitigate potential DoS attempts.
7) Engage with software vendors or third-party providers to confirm that their TensorFlow dependencies are updated.

These steps go beyond generic advice by emphasizing input validation at the application level and proactive monitoring to detect exploitation attempts.
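
As a concrete illustration of recommendation 3, the wrapper below is a minimal sketch of application-layer validation; the names `_require_scalar` and `safe_quantized_matmul` are hypothetical and not part of TensorFlow or the advisory. It rejects nonscalar range parameters before they reach the native kernel, complementing (not replacing) the upgrade to a patched release.

```python
# Minimal sketch of application-layer input validation (hypothetical wrapper names;
# patching TensorFlow remains the primary fix).
import tensorflow as tf

def _require_scalar(value, name):
    """Convert to a float tensor and reject anything that is not a scalar."""
    t = tf.convert_to_tensor(value, dtype=tf.float32)
    if t.shape.rank != 0:
        raise ValueError(f"{name} must be a scalar, got shape {t.shape}")
    return t

def safe_quantized_matmul(a, b, min_a, max_a, min_b, max_b, **kwargs):
    # Validate every quantization range parameter before it reaches the op.
    return tf.raw_ops.QuantizedMatMul(
        a=a,
        b=b,
        min_a=_require_scalar(min_a, "min_a"),
        max_a=_require_scalar(max_a, "max_a"),
        min_b=_require_scalar(min_b, "min_b"),
        max_b=_require_scalar(max_b, "max_b"),
        **kwargs,
    )
```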


Technical Details

Data Version: 5.1
Assigner Short Name: GitHub_M
Date Reserved: 2022-07-15T00:00:00.000Z
CISA Enriched: true

Threat ID: 682d9845c4522896dcbf4088

Added to database: 5/21/2025, 9:09:25 AM

Last enriched: 6/22/2025, 8:04:43 PM

Last updated: 8/5/2025, 12:33:59 AM

