
CVE-2022-46741: CWE-125 Out-of-bounds Read in PaddlePaddle

Medium
Published: Wed Dec 07 2022 (12/07/2022, 07:41:04 UTC)
Source: CVE
Vendor/Project: PaddlePaddle
Product: PaddlePaddle

Description

Out-of-bounds read in gather_tree in PaddlePaddle before 2.4. 

AI-Powered Analysis

Last updated: 06/22/2025, 06:08:10 UTC

Technical Analysis

CVE-2022-46741 is a medium-severity vulnerability classified as an out-of-bounds (OOB) read (CWE-125) affecting PaddlePaddle, an open-source deep learning platform developed by Baidu. The vulnerability exists in the gather_tree function in versions of PaddlePaddle prior to 2.4. An out-of-bounds read occurs when a program reads data outside the boundaries of allocated memory, which can lead to information disclosure, application crashes, or undefined behavior. In this case, the gather_tree function improperly accesses memory locations beyond the intended buffer limits.

Although no exploits are currently reported in the wild, the flaw could be leveraged by an attacker to read sensitive memory contents or to cause denial of service by crashing the application. PaddlePaddle is widely used for machine learning and AI workloads in research, industrial applications, and cloud services. The vulnerability does not require authentication or user interaction to be triggered if the attacker can supply crafted input to the affected function, which may be part of model inference or training pipelines.

The lack of a patch link suggests that remediation requires upgrading to version 2.4 or later, where the issue is fixed. Given the nature of the vulnerability, it primarily impacts confidentiality and availability, with integrity less directly affected. Exploitation requires knowledge of PaddlePaddle internals and the ability to influence input data.
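To make the flaw concrete: gather_tree is a beam-search utility that backtracks through per-step parent indices to reconstruct full beam sequences. If a parent index in the input is out of range and unvalidated, the backtracking walk reads past the beam axis. The pure-Python sketch below illustrates the operation and the bounds check that prevents the OOB read; the shapes, function signature, and check placement are illustrative assumptions, not PaddlePaddle's actual C++/CUDA implementation.

```python
# Simplified, pure-Python sketch of a gather_tree-style beam-search
# backtrack. Illustrative only -- NOT PaddlePaddle's implementation.

def gather_tree(step_ids, parent_ids):
    """Reconstruct full beam sequences by walking parents backwards.

    step_ids, parent_ids: nested lists of shape [max_time][beam_width].
    Returns the gathered ids with the same shape.
    """
    max_time = len(step_ids)
    beam_width = len(step_ids[0])
    out = [[0] * beam_width for _ in range(max_time)]

    for beam in range(beam_width):
        parent = beam
        for t in range(max_time - 1, -1, -1):
            # Without this check, a crafted parent index reads past the
            # beam axis -- the essence of a CWE-125 out-of-bounds read.
            if not (0 <= parent < beam_width):
                raise ValueError(f"parent index {parent} out of range")
            out[t][beam] = step_ids[t][parent]
            parent = parent_ids[t][parent]
    return out
```

In a native implementation the same unvalidated index arithmetic reads arbitrary adjacent memory instead of raising an error, which is presumably what the 2.4 fix addresses by adding equivalent index validation.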

Potential Impact

For European organizations leveraging PaddlePaddle in AI and machine learning workflows, this vulnerability could lead to unauthorized disclosure of sensitive data residing in memory, such as proprietary model parameters, training data, or intermediate computation results. This could undermine intellectual property protection and data privacy compliance, especially under GDPR. Additionally, exploitation could cause application crashes or denial of service, disrupting critical AI services or automated decision-making systems.

Sectors such as finance, healthcare, automotive, and manufacturing that increasingly rely on AI models may face operational risks and reputational damage. Since PaddlePaddle is used in cloud environments and on edge devices, the attack surface includes both centralized data centers and distributed systems. The absence of known exploits reduces immediate risk, but the medium severity and potential impact on confidentiality and availability warrant proactive mitigation. Organizations running vulnerable versions should prioritize remediation to maintain trust and compliance.

Mitigation Recommendations

1. Upgrade PaddlePaddle to version 2.4 or later, where the gather_tree out-of-bounds read vulnerability is addressed.
2. Conduct a thorough inventory of AI/ML workloads using PaddlePaddle and identify all instances running vulnerable versions.
3. Implement strict input validation and sanitization on data fed into PaddlePaddle models to reduce the risk of crafted inputs triggering the vulnerability.
4. Monitor application logs and system behavior for anomalies such as crashes or unexpected memory access errors that could indicate exploitation attempts.
5. Employ runtime application self-protection (RASP) or memory protection mechanisms (e.g., ASLR, DEP) to mitigate the impact of memory corruption vulnerabilities.
6. Restrict access to AI model inference and training interfaces to trusted users and systems to limit exposure.
7. Follow the PaddlePaddle community and security advisories for updates and patches.
8. Consider isolating AI workloads in containerized or sandboxed environments to contain potential exploitation effects.
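Steps 1 and 2 above can be partially automated with a version sweep across hosts. The sketch below is a minimal, hypothetical check: the helper name and the numeric-only version parsing are assumptions, though `paddle.__version__` is the package's conventional version attribute.

```python
# Hedged sketch: flag PaddlePaddle installs older than the fixed 2.4
# release during an inventory sweep. Illustrative helper, not an
# official tool.

def is_vulnerable(version: str, fixed: str = "2.4") -> bool:
    """Return True if `version` is older than the fixed release."""
    def parts(v):
        # Keep leading numeric dotted components, e.g. "2.3.2" -> (2, 3, 2).
        # Pre-release suffixes are ignored in this simplified parser.
        out = []
        for p in v.split("."):
            digits = "".join(ch for ch in p if ch.isdigit())
            if not digits:
                break
            out.append(int(digits))
        return tuple(out)
    return parts(version) < parts(fixed)

# Example sweep entry point: the import is only attempted, so the
# script also runs cleanly on hosts without PaddlePaddle installed.
try:
    import paddle
    if is_vulnerable(paddle.__version__):
        print("VULNERABLE: upgrade PaddlePaddle to >= 2.4")
except ImportError:
    pass
```

Tuple comparison makes "2.4.0" compare as not older than "2.4", so patched point releases are not flagged.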


Technical Details

Data Version: 5.1

Assigner Short Name: Baidu

Date Reserved: 2022-12-07T05:44:14.697Z

CISA Enriched: true

Threat ID: 682d9848c4522896dcbf5d8c

Added to database: 5/21/2025, 9:09:28 AM

Last enriched: 6/22/2025, 6:08:10 AM

Last updated: 10/15/2025, 1:45:35 AM

Views: 21


