CVE-2022-35986: CWE-20: Improper Input Validation in tensorflow tensorflow

Severity: Medium
Published: Fri Sep 16 2022 (09/16/2022, 21:45:13 UTC)
Source: CVE
Vendor/Project: tensorflow
Product: tensorflow

Description

TensorFlow is an open source platform for machine learning. If `RaggedBincount` is given an empty input tensor `splits`, it results in a segfault that can be used to trigger a denial of service attack. We have patched the issue in GitHub commit 7a4591fd4f065f4fa903593bc39b2f79530a74b8. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported range. There are no known workarounds for this issue.
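
For context, the snippet below is a minimal sketch of the crash condition described above, assuming the documented `tf.raw_ops.RaggedBincount` signature and typical dtypes; the exact tensor values are illustrative. On an unpatched build it is expected to terminate the process, so it should only be run in a disposable test environment.

```python
# Minimal sketch of the reported crash condition (assumption: standard
# tf.raw_ops.RaggedBincount signature and dtypes). On an unpatched
# TensorFlow (< 2.7.2, 2.8.0, or 2.9.0) this is expected to segfault the
# process; patched builds should instead reject the input with an error.
# Run only in a disposable test environment.
import tensorflow as tf

splits = tf.constant([], shape=[0], dtype=tf.int64)    # empty `splits` is the trigger
values = tf.constant([], shape=[0], dtype=tf.int32)
size = tf.constant(0, dtype=tf.int32)
weights = tf.constant([], shape=[0], dtype=tf.float32)

tf.raw_ops.RaggedBincount(splits=splits, values=values, size=size,
                          weights=weights, binary_output=False)
```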

AI-Powered Analysis

Last updated: 06/22/2025, 19:50:00 UTC

Technical Analysis

CVE-2022-35986 is a medium-severity vulnerability affecting multiple versions of TensorFlow, an open-source machine learning platform widely used in research, development, and production environments. The vulnerability arises from improper input validation (CWE-20) in the TensorFlow operation `RaggedBincount`. Specifically, when the `splits` input tensor is empty, the function triggers a segmentation fault (segfault), causing the TensorFlow process to crash. This behavior can be exploited to cause a denial-of-service (DoS) condition by crashing applications or services that rely on TensorFlow for machine learning tasks.

The issue affects TensorFlow versions prior to 2.7.2, versions 2.8.0 up to but not including 2.8.1, and versions 2.9.0 up to but not including 2.9.1. The vulnerability was patched in commit 7a4591fd4f065f4fa903593bc39b2f79530a74b8, with fixes backported to supported versions 2.7.2, 2.8.1, and 2.9.1. There are no known workarounds, and no exploits have been observed in the wild to date.

Exploitation requires an attacker to supply a crafted input tensor to the vulnerable function, which may require some level of access to the machine learning pipeline or to API endpoints that accept user input for TensorFlow processing. The impact is limited to denial of service through process crashes, with no direct evidence of code execution or data corruption. However, the disruption of machine learning services can have significant operational consequences, especially in environments where TensorFlow is integrated into critical workflows.
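
As a quick triage aid, the sketch below checks whether an installed TensorFlow build falls inside the affected ranges listed above. It assumes the third-party `packaging` library is available and that `tf.__version__` is a standard release string; custom or vendor builds may need manual review.

```python
# Triage sketch for CVE-2022-35986: report whether the installed TensorFlow
# falls inside the affected version ranges (< 2.7.2, 2.8.0, 2.9.0).
from packaging.version import Version

import tensorflow as tf

AFFECTED_RANGES = [
    (Version("0"),     Version("2.7.2")),  # anything before 2.7.2
    (Version("2.8.0"), Version("2.8.1")),  # 2.8.0 only
    (Version("2.9.0"), Version("2.9.1")),  # 2.9.0 only
]

def is_affected(version_str: str) -> bool:
    v = Version(version_str)
    return any(lo <= v < hi for lo, hi in AFFECTED_RANGES)

if __name__ == "__main__":
    status = "affected" if is_affected(tf.__version__) else "patched or unaffected"
    print(f"TensorFlow {tf.__version__}: {status}")
```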

Potential Impact

For European organizations, the primary impact of CVE-2022-35986 is the potential disruption of machine learning services that rely on vulnerable TensorFlow versions. This can affect sectors such as finance, healthcare, automotive, manufacturing, and research institutions where TensorFlow is used for predictive analytics, automated decision-making, or AI-driven applications. A denial of service caused by this vulnerability could lead to downtime, loss of productivity, delayed processing of critical data, and potential cascading effects on dependent systems. While the vulnerability does not directly compromise confidentiality or integrity, the availability impact can be significant in time-sensitive or safety-critical applications. Organizations using TensorFlow in exposed environments—such as public-facing APIs, cloud-based ML services, or shared platforms—are at higher risk. The lack of known exploits reduces immediate threat levels, but the ease of triggering a crash with crafted input means that attackers with access to input channels could disrupt services. Given the increasing reliance on AI and ML in European digital infrastructure, this vulnerability underscores the importance of timely patching and input validation controls.

Mitigation Recommendations

1. Upgrade TensorFlow to the latest patched versions: 2.7.2, 2.8.1, 2.9.1, or later, as these contain the fix for this vulnerability.
2. Implement strict input validation and sanitization at the application layer before passing data to TensorFlow operations, especially for user-supplied or external inputs that may reach the `RaggedBincount` function (see the sketch after this list).
3. Employ runtime monitoring and anomaly detection to identify unexpected crashes or service disruptions related to TensorFlow processes.
4. Isolate TensorFlow workloads in containerized or sandboxed environments to limit the blast radius of potential crashes.
5. For organizations exposing ML inference APIs, enforce authentication and authorization to restrict access to trusted users and systems, reducing the risk of malicious input triggering the DoS.
6. Maintain robust logging and alerting mechanisms to detect repeated or suspicious input patterns that could indicate exploitation attempts.
7. Coordinate patch management with development and operations teams to ensure rapid deployment of fixes across all affected environments.
8. Consider fallback or redundancy mechanisms for critical ML services to maintain availability during patching or incident response.
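
The sketch below illustrates recommendation 2: a hypothetical `safe_ragged_bincount` wrapper (the name and structure are illustrative, not part of TensorFlow) that rejects an empty `splits` tensor at the application layer before the raw op is invoked. The check uses eager-mode semantics; graph-mode pipelines would need an equivalent static-shape or `tf.debugging` assertion. On patched versions the guard is redundant but harmless, which makes it a reasonable stopgap for deployments that cannot upgrade immediately.

```python
# Hypothetical defensive wrapper (illustrative only): validate user-supplied
# tensors before they reach RaggedBincount so an empty `splits` input is
# rejected at the application layer rather than reaching the vulnerable kernel.
import tensorflow as tf

def safe_ragged_bincount(splits, values, size, weights, binary_output=False):
    splits = tf.convert_to_tensor(splits, dtype=tf.int64)
    # Eager-mode emptiness check; use a static-shape assertion in graph code.
    if int(tf.size(splits)) == 0:
        raise ValueError(
            "`splits` must be non-empty; refusing input that would hit the "
            "RaggedBincount crash described in CVE-2022-35986."
        )
    return tf.raw_ops.RaggedBincount(
        splits=splits,
        values=values,
        size=size,
        weights=weights,
        binary_output=binary_output,
    )
```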


Technical Details

Data Version: 5.1
Assigner Short Name: GitHub_M
Date Reserved: 2022-07-15T00:00:00.000Z
CISA Enriched: true

Threat ID: 682d9845c4522896dcbf40ee

Added to database: 5/21/2025, 9:09:25 AM

Last enriched: 6/22/2025, 7:50:00 PM

Last updated: 2/7/2026, 2:27:47 PM

