CVE-2022-23560: CWE-125: Out-of-bounds Read in tensorflow tensorflow

Medium
Published: Fri Feb 04 2022 (02/04/2022, 22:32:36 UTC)
Source: CVE
Vendor/Project: tensorflow
Product: tensorflow

Description

Tensorflow is an Open Source Machine Learning Framework. An attacker can craft a TFLite model that would allow limited reads and writes outside of arrays in TFLite. This exploits missing validation in the conversion from sparse tensors to dense tensors. The fix is included in TensorFlow 2.8.0. We will also cherrypick this commit on TensorFlow 2.7.1, TensorFlow 2.6.3, and TensorFlow 2.5.3, as these are also affected and still in supported range. Users are advised to upgrade as soon as possible.

AI-Powered Analysis

Last updated: 06/22/2025, 03:21:13 UTC

Technical Analysis

CVE-2022-23560 is a medium-severity vulnerability in TensorFlow, an open-source machine learning framework widely used for developing and deploying machine learning models. The vulnerability stems from missing validation during the conversion from sparse tensors to dense tensors within the TensorFlow Lite (TFLite) component. An attacker can craft a malicious TFLite model that triggers out-of-bounds memory access, allowing limited reads and writes beyond the intended array boundaries. The issue is categorized under CWE-125 (Out-of-bounds Read) and CWE-787 (Out-of-bounds Write), indicating that it can lead to unauthorized memory access and potential memory corruption.

Affected versions are releases prior to 2.5.3, the 2.6.x series prior to 2.6.3, and 2.7.0. The issue was fixed in TensorFlow 2.8.0, with the fix backported to the supported 2.5.3, 2.6.3, and 2.7.1 releases. Exploitation requires supplying a specially crafted TFLite model to the vulnerable TensorFlow environment, a realistic scenario wherever untrusted or third-party models are loaded and executed. There are no known exploits in the wild as of the publication date, but the vulnerability poses a risk because memory corruption of this kind can lead to denial of service or, depending on the deployment context and environment protections, arbitrary code execution.
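The root cause, missing index validation during sparse-to-dense conversion, can be illustrated with a minimal, self-contained sketch. This is plain Python for clarity, not TensorFlow's actual C++ implementation; the function name and shapes are illustrative only:

```python
# Illustrative sketch of COO-style sparse-to-dense conversion. The bounds
# check below is the kind of validation whose absence enabled CVE-2022-23560:
# without it, attacker-controlled indices would read/write outside the
# destination buffer in native code.

def sparse_to_dense(indices, values, dense_shape, default=0):
    """Convert sparse (indices, values) pairs to a dense 2-D nested list."""
    rows, cols = dense_shape
    dense = [[default] * cols for _ in range(rows)]
    for (r, c), v in zip(indices, values):
        # The critical validation step: reject out-of-range indices.
        if not (0 <= r < rows and 0 <= c < cols):
            raise ValueError(f"index ({r}, {c}) out of bounds for shape {dense_shape}")
        dense[r][c] = v
    return dense
```

In memory-safe Python a bad index raises an exception anyway; in the native TFLite kernels, the explicit check is what stands between a crafted model and out-of-bounds memory access.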

Potential Impact

For European organizations, the impact of this vulnerability depends largely on the extent to which TensorFlow, particularly TensorFlow Lite, is integrated into their machine learning workflows and production environments. Organizations involved in AI research, development of ML-powered applications, or deployment of ML models on edge devices and mobile platforms are at higher risk. The out-of-bounds read and write could lead to application crashes, denial of service, or potentially allow attackers to manipulate the execution flow if combined with other vulnerabilities, thereby compromising confidentiality, integrity, and availability of ML systems. This could affect sectors such as finance, healthcare, automotive, and critical infrastructure where ML models are increasingly used for decision-making and automation. Additionally, since TFLite is often used in resource-constrained environments (e.g., IoT devices), exploitation could lead to device malfunction or compromise, impacting operational technology. The absence of known exploits reduces immediate risk but does not eliminate the potential for future attacks, especially as adversaries may target ML supply chains or model deployment pipelines.

Mitigation Recommendations

European organizations should prioritize the following measures:

- Upgrade TensorFlow installations to version 2.8.0 or later, or apply the backported patches available for supported earlier versions (2.5.3, 2.6.3, 2.7.1).
- Implement strict validation and provenance checks on all TFLite models before deployment, ensuring that only trusted and verified models are executed.
- Sandbox or isolate ML model execution environments to limit the impact of potential memory corruption.
- Monitor and log ML model loading and execution to help detect anomalous behavior indicative of exploitation attempts.
- Where upgrading is not immediately feasible, apply runtime protections such as memory safety tools (e.g., AddressSanitizer) during development and testing to surface out-of-bounds accesses.
- Review ML supply chain security practices to prevent the introduction of malicious models, and maintain an inventory of TensorFlow versions in use across all teams and projects.
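A simple inventory or pre-deployment gate can encode the advisory's fixed versions directly. The sketch below is a hypothetical helper (not an official TensorFlow API); the version thresholds come from the advisory itself:

```python
# Hedged sketch: decide whether a given TensorFlow version string contains
# the CVE-2022-23560 fix. Fixed releases per the advisory: 2.8.0, plus
# backports 2.5.3, 2.6.3, and 2.7.1.

FIXED_MINIMUMS = {
    (2, 5): (2, 5, 3),
    (2, 6): (2, 6, 3),
    (2, 7): (2, 7, 1),
}

def is_patched(version: str) -> bool:
    """Return True if `version` (e.g. "2.6.3") includes the fix."""
    parts = tuple(int(p) for p in version.split(".")[:3])
    if parts >= (2, 8, 0):
        return True
    # Backported fix lines: compare against the minor series' minimum.
    minimum = FIXED_MINIMUMS.get(parts[:2])
    return minimum is not None and parts >= minimum
```

Such a check could run in CI or in a model-serving startup script to refuse loading untrusted TFLite models on vulnerable builds.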

Technical Details

Data Version
5.1
Assigner Short Name
GitHub_M
Date Reserved
2022-01-19T00:00:00.000Z
Cisa Enriched
true

Threat ID: 682d9848c4522896dcbf6268

Added to database: 5/21/2025, 9:09:28 AM

Last enriched: 6/22/2025, 3:21:13 AM

Last updated: 7/26/2025, 3:30:14 PM
