
CVE-2022-29208: CWE-787: Out-of-bounds Write in tensorflow/tensorflow

Severity: Medium
Published: Fri May 20 2022 (05/20/2022, 22:30:13 UTC)
Source: CVE
Vendor/Project: tensorflow
Product: tensorflow

Description

TensorFlow is an open source platform for machine learning. Prior to versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the implementation of `tf.raw_ops.EditDistance` has incomplete validation. Users can pass negative values to cause a segmentation fault based denial of service. In multiple places throughout the code, one may compute an index for a write operation. However, the existing validation only checks against the upper bound of the array. Hence, it is possible to write before the array by massaging the input to generate negative values for `loc`. Versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4 contain a patch for this issue.
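
The advisory text above does not include a reproducer, but based on its description a crash could plausibly be triggered by feeding negative sparse-tensor indices and shapes to the raw op. The call below is a hedged sketch: the parameter names match the public `tf.raw_ops.EditDistance` signature, while the specific negative values are illustrative assumptions rather than a confirmed proof of concept.

```python
# Hedged sketch only: illustrative negative inputs, not a confirmed PoC.
# Run only against an affected build (< 2.6.4 / 2.7.2 / 2.8.1 / 2.9.0) in an
# isolated environment; patched builds are expected to reject these inputs.
import tensorflow as tf

tf.raw_ops.EditDistance(
    hypothesis_indices=[[-100, -100, -100]],   # negative sparse indices
    hypothesis_values=[0],
    hypothesis_shape=[-100, -100, -100],       # negative shape entries
    truth_indices=[[-100, -100, -100]],
    truth_values=[1],
    truth_shape=[-100, -100, -100],
    normalize=False,
)
```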

AI-Powered Analysis

Last updated: 06/22/2025, 01:19:46 UTC

Technical Analysis

CVE-2022-29208 is a medium severity vulnerability in TensorFlow, an open-source machine learning platform widely used for developing and deploying machine learning models. The vulnerability arises from an out-of-bounds write condition (CWE-787) in the implementation of the `tf.raw_ops.EditDistance` operation. Specifically, prior to patched versions 2.9.0, 2.8.1, 2.7.2, and 2.6.4, the function does not properly validate input indices, allowing negative values to bypass the upper bound checks. This improper validation can cause the code to compute negative indices for write operations, resulting in writes before the start of an array buffer. Such out-of-bounds writes can lead to memory corruption and cause a segmentation fault, which manifests as a denial of service (DoS) by crashing the process using TensorFlow. The vulnerability requires crafted input data that triggers negative index calculations but does not require authentication or user interaction beyond supplying the malicious input to the affected TensorFlow API. No known exploits in the wild have been reported to date. The issue affects multiple TensorFlow versions prior to the specified patched releases, and the fix involves proper validation of input indices to prevent negative values from causing out-of-bounds writes.
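
To make the flaw concrete, the following is a conceptual sketch (not TensorFlow source code) of the pattern described above: an index computed from attacker-influenced input is checked only against the array's upper bound, so a negative `loc` slips through and the write lands before the start of the buffer.

```python
# Conceptual illustration only; simplified Python, not the actual C++ kernel.
def unsafe_write(output, loc, value):
    # Incomplete validation: only the upper bound is checked.
    if loc < len(output):
        output[loc] = value   # in C++ a negative loc writes before the buffer

def patched_write(output, loc, value):
    # Patched behavior as described: validate both bounds before writing.
    if 0 <= loc < len(output):
        output[loc] = value
```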

Potential Impact

For European organizations leveraging TensorFlow in their machine learning pipelines, this vulnerability primarily poses a risk of denial of service. An attacker capable of supplying malicious input to affected TensorFlow services or applications can cause crashes, disrupting machine learning workflows, data processing, or AI-driven services. This can impact availability and operational continuity, especially in environments where TensorFlow is integrated into critical systems such as financial analytics, healthcare diagnostics, or industrial automation. While the vulnerability does not directly enable code execution or data leakage, the induced crashes could be exploited to degrade service reliability or trigger cascading failures in dependent systems. Organizations relying on TensorFlow for real-time or production workloads may experience downtime or require emergency patching, which can incur operational costs. The risk is heightened in multi-tenant or cloud environments where untrusted users might supply input to shared TensorFlow instances. However, the lack of known exploits and the requirement to supply specific malformed inputs somewhat limit the immediate threat level.

Mitigation Recommendations

European organizations should prioritize upgrading TensorFlow installations to the patched versions 2.9.0, 2.8.1, 2.7.2, or 2.6.4 or later to remediate this vulnerability. Beyond upgrading, organizations should implement input validation and sanitization at the application layer to prevent malformed or malicious inputs from reaching TensorFlow APIs, especially `tf.raw_ops.EditDistance`. Employing runtime monitoring and anomaly detection to identify unusual crashes or segmentation faults in TensorFlow processes can help detect exploitation attempts early. For environments exposing TensorFlow services to external or untrusted users, consider isolating these services in sandboxed containers or virtual machines to limit the impact of potential crashes. Additionally, applying strict access controls and network segmentation to restrict who can submit data to TensorFlow-powered services reduces attack surface. Regularly auditing and updating machine learning dependencies as part of the software supply chain security practices will help prevent similar vulnerabilities. Finally, maintaining robust backup and recovery procedures ensures rapid restoration of services in case of disruption.
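
As one concrete form of the application-layer validation suggested above, a thin wrapper can reject negative indices or shapes before they ever reach the raw op. This is a hedged sketch: `validated_edit_distance` is a hypothetical helper, not part of TensorFlow, and upgrading to a patched release remains the primary remediation.

```python
# Hypothetical defensive wrapper; illustrative only, not a substitute for
# upgrading to a patched TensorFlow release.
import tensorflow as tf

def validated_edit_distance(hypothesis_indices, hypothesis_values, hypothesis_shape,
                            truth_indices, truth_values, truth_shape, normalize=True):
    """Reject negative indices/shapes before invoking tf.raw_ops.EditDistance."""
    for name, t in (("hypothesis_indices", hypothesis_indices),
                    ("hypothesis_shape", hypothesis_shape),
                    ("truth_indices", truth_indices),
                    ("truth_shape", truth_shape)):
        if bool(tf.reduce_any(tf.convert_to_tensor(t) < 0)):
            raise ValueError(f"{name} contains negative values; rejecting input")
    return tf.raw_ops.EditDistance(
        hypothesis_indices=hypothesis_indices, hypothesis_values=hypothesis_values,
        hypothesis_shape=hypothesis_shape, truth_indices=truth_indices,
        truth_values=truth_values, truth_shape=truth_shape, normalize=normalize)
```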


Technical Details

Data Version: 5.1
Assigner Short Name: GitHub_M
Date Reserved: 2022-04-13T00:00:00.000Z
CISA Enriched: true

Threat ID: 682d9848c4522896dcbf654e

Added to database: 5/21/2025, 9:09:28 AM

Last enriched: 6/22/2025, 1:19:46 AM

Last updated: 2/7/2026, 11:10:25 AM


