CVE-2022-35937: CWE-125: Out-of-bounds Read in tensorflow tensorflow

Medium
Published: Fri Sep 16 2022 (09/16/2022, 19:40:20 UTC)
Source: CVE
Vendor/Project: tensorflow
Product: tensorflow

Description

TensorFlow is an open source platform for machine learning. The `GatherNd` function takes arguments that determine the sizes of inputs and outputs. If the inputs given are greater than or equal to the sizes of the outputs, an out-of-bounds memory read is triggered. This issue has been patched in GitHub commit 595a65a3e224a0362d7e68c2213acfc2b499a196. The fix will be included in TensorFlow 2.10.0. We will also cherrypick this commit on TensorFlow 2.9.1, TensorFlow 2.8.1, and TensorFlow 2.7.2, as these are also affected and still in supported range. There are no known workarounds for this issue.
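For context, `GatherNd` gathers slices from a `params` tensor at the positions given by an `indices` tensor, so indices that fall outside `params` are exactly the condition the advisory describes. The following is a minimal sketch of that shape of call, using illustrative tensors rather than the advisory's reproducer; patched releases reject it with an `InvalidArgumentError`, while affected builds could read past the end of the buffer:

```python
import tensorflow as tf

# params has 3 rows; any row index >= 3 points outside the allocated buffer.
params = tf.constant([[1.0, 2.0],
                      [3.0, 4.0],
                      [5.0, 6.0]])

# Crafted indices that exceed the size of the first dimension of params.
bad_indices = tf.constant([[7]])

try:
    # On patched releases (2.7.2 / 2.8.1 / 2.9.1 / 2.10.0) this raises
    # InvalidArgumentError; on affected builds such a call could read
    # beyond the bounds of `params`, per the advisory.
    result = tf.raw_ops.GatherNd(params=params, indices=bad_indices)
    print(result)
except tf.errors.InvalidArgumentError as e:
    print("rejected:", e)
```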

AI-Powered Analysis

AI analysis last updated: 06/22/2025, 20:21:13 UTC

Technical Analysis

CVE-2022-35937 is a medium-severity vulnerability in TensorFlow, an open-source platform widely used for developing and deploying machine learning models. The flaw is an out-of-bounds read (CWE-125) in the `GatherNd` function, which takes arguments that determine the sizes of its inputs and outputs. If the indices supplied to `GatherNd` are greater than or equal to the output sizes, the function reads memory beyond the allocated bounds. Such a flaw can disclose adjacent memory contents or crash the application through invalid memory access.

The issue affects TensorFlow releases prior to 2.7.2, as well as the 2.8.x branch (>= 2.8.0, < 2.8.1) and the 2.9.x branch (>= 2.9.0, < 2.9.1). It has been patched in TensorFlow 2.10.0 and backported to the supported releases 2.7.2, 2.8.1, and 2.9.1. There are no known workarounds, and no exploits have been reported in the wild to date.

Exploitation requires that an attacker can supply crafted inputs to the `GatherNd` function, which is typically invoked within TensorFlow model code or data pipelines. No authentication is needed, but the attacker must be able to influence or control the input data fed into the vulnerable TensorFlow instance. The impact primarily concerns confidentiality and availability: out-of-bounds reads can leak sensitive memory contents or cause application crashes, while integrity impact is limited because this is a read-only memory violation. The vulnerability is relevant to any organization running affected TensorFlow versions in its machine learning workflows or services.
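Because the affected ranges are narrow, a first triage step is simply comparing the deployed TensorFlow version against them. The helper below is an illustrative sketch (not part of TensorFlow or any official CVE tooling) that encodes the ranges listed above:

```python
import re

import tensorflow as tf

# Affected ranges from the analysis above:
# < 2.7.2, >= 2.8.0 < 2.8.1, >= 2.9.0 < 2.9.1.
AFFECTED_RANGES = [
    ((0, 0, 0), (2, 7, 2)),
    ((2, 8, 0), (2, 8, 1)),
    ((2, 9, 0), (2, 9, 1)),
]

def parse(version: str) -> tuple:
    """Reduce a version string such as '2.9.0' or '2.9.0rc1' to (major, minor, patch)."""
    nums = re.findall(r"\d+", version)[:3]
    nums += ["0"] * (3 - len(nums))
    return tuple(int(n) for n in nums)

def is_affected(version: str) -> bool:
    """Return True if the version falls inside one of the affected ranges."""
    v = parse(version)
    return any(lo <= v < hi for lo, hi in AFFECTED_RANGES)

print(tf.__version__, "affected by CVE-2022-35937:", is_affected(tf.__version__))
```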

Potential Impact

For European organizations, the impact of CVE-2022-35937 depends on the extent to which TensorFlow is integrated into their machine learning infrastructure. Organizations in sectors such as finance, healthcare, automotive, and telecommunications that rely on TensorFlow for AI model training or inference could face risks of sensitive data leakage or service disruption if the vulnerability is exploited. Out-of-bounds reads can expose internal memory contents, potentially leaking proprietary model data or user information processed by the models. Additionally, crashes caused by this vulnerability could lead to denial of service in critical AI-driven applications, impacting operational continuity.

Although no active exploits are known, the widespread adoption of TensorFlow in European research institutions and enterprises means that unpatched systems could be targeted in the future. The lack of authentication requirements for exploitation increases the risk if attackers can supply malicious inputs, especially in multi-tenant or cloud environments where TensorFlow services are exposed to external data sources. The vulnerability does not directly allow code execution, limiting the severity compared to remote code execution flaws, but the confidentiality and availability risks remain significant for sensitive AI workloads.

Mitigation Recommendations

European organizations should prioritize upgrading TensorFlow to version 2.10.0 or later, or apply the backported patches available for versions 2.7.2, 2.8.1, and 2.9.1. Since no workarounds exist, patching is the primary mitigation. Organizations should also audit their machine learning pipelines to identify any use of the `GatherNd` function and assess whether untrusted inputs could reach it, and implement input validation and sanitization upstream to prevent maliciously crafted indices from triggering out-of-bounds reads.

In cloud or multi-tenant environments, restrict access to TensorFlow services and enforce strict input controls to minimize exposure, and review deployment configurations to ensure TensorFlow instances are not unnecessarily exposed to untrusted data sources. Monitoring for unusual application crashes or memory-access errors in TensorFlow instances can help detect exploitation attempts. Runtime protections such as memory-safety tooling or sandboxing of TensorFlow workloads can further reduce risk. Finally, track TensorFlow security advisories so that future patches are applied promptly.
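As one concrete form of the upstream validation recommended above, indices originating from untrusted sources can be bounds-checked before they ever reach `gather_nd`. The wrapper below is a sketch that assumes statically known tensor shapes; the function name and structure are illustrative, not an official API:

```python
import tensorflow as tf

def safe_gather_nd(params: tf.Tensor, indices: tf.Tensor) -> tf.Tensor:
    """Reject out-of-range indices before calling tf.gather_nd."""
    # The last dimension of `indices` addresses the leading dimensions of `params`.
    index_depth = indices.shape[-1]
    limits = tf.constant(params.shape[:index_depth].as_list(), dtype=indices.dtype)

    # Every index must be within [0, dim_size) for its corresponding dimension.
    in_bounds = tf.reduce_all((indices >= 0) & (indices < limits))
    tf.debugging.assert_equal(
        in_bounds, True,
        message="gather_nd indices out of range for params shape")
    return tf.gather_nd(params, indices)

# Example: a valid lookup succeeds, an out-of-range index is rejected.
params = tf.constant([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
print(safe_gather_nd(params, tf.constant([[0], [2]])))   # OK: rows 0 and 2
# safe_gather_nd(params, tf.constant([[5]]))             # raises InvalidArgumentError
```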

Technical Details

Data Version
5.1
Assigner Short Name
GitHub_M
Date Reserved
2022-07-15T00:00:00.000Z
Cisa Enriched
true

Threat ID: 682d9845c4522896dcbf3fdf

Added to database: 5/21/2025, 9:09:25 AM

Last enriched: 6/22/2025, 8:21:13 PM

Last updated: 7/31/2025, 9:52:42 PM
