CVE-2022-41883: CWE-125: Out-of-bounds Read in tensorflow/tensorflow

Medium
Published: Fri Nov 18 2022 (11/18/2022, 00:00:00 UTC)
Source: CVE
Vendor/Project: tensorflow
Product: tensorflow

Description

TensorFlow is an open source platform for machine learning. When ops that have specified input sizes receive a differing number of inputs, the executor will crash. We have patched the issue in GitHub commit f5381e0e10b5a61344109c1b7c174c68110f7629. The fix will be included in TensorFlow 2.11. We will also cherrypick this commit on TensorFlow 2.10.1, 2.9.3, and TensorFlow 2.8.4, as these are also affected and still in supported range.

AI-Powered Analysis

Last updated: 06/21/2025, 21:23:01 UTC

Technical Analysis

CVE-2022-41883 is a medium-severity vulnerability classified as CWE-125 (Out-of-bounds Read) affecting TensorFlow, an open-source machine learning platform widely used for developing and deploying ML models. The issue arises when TensorFlow operations (ops) that declare a specific number of inputs receive a different number of inputs. This mismatch causes the TensorFlow executor to perform an out-of-bounds read, crashing the executor process.

The vulnerability affects TensorFlow 2.10.0 as well as the still-supported 2.9.x and 2.8.x release lines; it is fixed in TensorFlow 2.11, with the fix cherry-picked into the patch releases 2.10.1, 2.9.3, and 2.8.4. The root cause is improper input validation in the executor component, which does not correctly handle the discrepancy between the expected and actual input counts for certain ops.

While this vulnerability does not appear to allow arbitrary code execution or direct data leakage, the out-of-bounds read can cause denial of service (DoS) by crashing the TensorFlow process, potentially disrupting ML workflows or services that rely on TensorFlow. No exploits are known in the wild. Triggering the flaw requires no authentication or user interaction beyond supplying malformed inputs to TensorFlow operations, which an attacker could do if able to influence the input data fed to TensorFlow models or pipelines.
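The class of bug can be sketched in a few lines. The snippet below is a simplified, hypothetical stand-in for an executor dispatching ops with a fixed declared arity (the `OpSpec` class and function names are illustrative and do not mirror TensorFlow's actual C++ executor); it contrasts the buggy pattern, which indexes up to the declared input count without checking how many inputs were actually supplied, with the patched pattern that validates the count first:

```python
# Illustrative sketch only: names are hypothetical, not TensorFlow internals.

class OpSpec:
    """An op definition with a fixed, declared number of inputs."""
    def __init__(self, name, expected_inputs):
        self.name = name
        self.expected_inputs = expected_inputs

def run_op_unchecked(spec, inputs):
    # Buggy pattern (analogous to CWE-125): reads up to the declared arity
    # without confirming the caller supplied that many inputs. A short input
    # list raises IndexError here; in native code, this is an out-of-bounds
    # read that crashes the process.
    return [inputs[i] for i in range(spec.expected_inputs)]

def run_op_checked(spec, inputs):
    # Patched pattern: validate the actual vs. declared input count first,
    # rejecting malformed input with a recoverable error instead of crashing.
    if len(inputs) != spec.expected_inputs:
        raise ValueError(
            f"{spec.name} expects {spec.expected_inputs} inputs, "
            f"got {len(inputs)}"
        )
    return [inputs[i] for i in range(spec.expected_inputs)]

spec = OpSpec("AddN", expected_inputs=3)

try:
    run_op_unchecked(spec, [1, 2])   # reads past the end of the input list
except IndexError:
    print("unchecked executor crashed")

try:
    run_op_checked(spec, [1, 2])     # rejected cleanly instead
except ValueError as e:
    print(e)
```

The patched behavior corresponds to the validation added in commit f5381e0e10b5a61344109c1b7c174c68110f7629: a mismatched input count becomes a handled error rather than an out-of-bounds access.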

Potential Impact

For European organizations leveraging TensorFlow in production environments—such as research institutions, AI-driven enterprises, and cloud service providers—the primary impact is the risk of denial of service due to executor crashes. This can disrupt critical machine learning workloads, delay data processing, and degrade service availability. Organizations deploying TensorFlow in multi-tenant or cloud environments face increased risk if attackers can supply crafted inputs remotely, potentially causing service outages.

While the vulnerability does not directly compromise confidentiality or integrity, the availability impact can affect business continuity, especially in sectors relying heavily on AI for decision-making, automation, or customer-facing applications. Organizations with automated ML pipelines may experience cascading failures or require manual intervention to recover from crashes. Given the widespread adoption of TensorFlow across industries in Europe, the disruption potential is significant, particularly in finance, healthcare, automotive, and telecommunications, where AI workloads are integral.

Mitigation Recommendations

To mitigate this vulnerability, European organizations should promptly upgrade TensorFlow to version 2.11 or move to the patched releases 2.10.1, 2.9.3, or 2.8.4. It is also critical to audit ML pipelines and applications for components that accept external or untrusted input data feeding into TensorFlow ops, and to implement input validation and sanitization so that malformed inputs cannot trigger the out-of-bounds read. Organizations should implement robust monitoring and alerting on TensorFlow process health to detect crashes early and enable rapid recovery. Employing containerization or sandboxing for TensorFlow workloads can limit the blast radius of a crash, and for cloud deployments, managed ML services with vendor-provided patches can reduce exposure. Finally, thorough regression testing after patching ensures that ML models and workflows continue to function correctly without unintended side effects.
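As part of an upgrade audit, the installed TensorFlow version can be compared against the patched release floors listed in the advisory. The helper below is a hedged sketch (not an official TensorFlow API): it treats 2.8.4, 2.9.3, and 2.10.1 as the minimum patched versions on their respective lines and 2.11.0+ as fixed, flagging anything older as needing an upgrade:

```python
# Minimum patched versions per supported release line for CVE-2022-41883.
PATCHED_MINIMUMS = {
    (2, 8): (2, 8, 4),
    (2, 9): (2, 9, 3),
    (2, 10): (2, 10, 1),
}

def parse_version(version_string):
    """Extract the numeric release segment, e.g. "2.10.0rc1" -> (2, 10, 0)."""
    parts = []
    for piece in version_string.split(".")[:3]:
        digits = ""
        for ch in piece:
            if ch.isdigit():
                digits += ch
            else:
                break          # stop at pre-release suffixes like "rc1"
        parts.append(int(digits or 0))
    while len(parts) < 3:
        parts.append(0)
    return tuple(parts)

def is_patched(version_string):
    """True if this TensorFlow version includes the CVE-2022-41883 fix."""
    major, minor, patch = parse_version(version_string)
    if (major, minor) in PATCHED_MINIMUMS:
        return (major, minor, patch) >= PATCHED_MINIMUMS[(major, minor)]
    # 2.11 and later ship the fix; older lines are out of support and
    # conservatively treated as unpatched.
    return (major, minor) >= (2, 11)

print(is_patched("2.10.0"))   # False: affected, upgrade to 2.10.1+
print(is_patched("2.10.1"))   # True: cherry-picked fix present
print(is_patched("2.11.0"))   # True
```

In practice the version string would come from `tensorflow.__version__`; the check is deliberately conservative, so unsupported pre-2.8 lines report as unpatched.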


Technical Details

Data Version
5.1
Assigner Short Name
GitHub_M
Date Reserved
2022-09-30T00:00:00.000Z
Cisa Enriched
true

Threat ID: 682d9849c4522896dcbf6c96

Added to database: 5/21/2025, 9:09:29 AM

Last enriched: 6/21/2025, 9:23:01 PM

Last updated: 8/18/2025, 10:26:50 PM

