
CVE-2024-31584

Severity: Medium
Published: Fri Apr 19 2024 (04/19/2024, 00:00:00 UTC)
Source: CVE Database V5

Description

PyTorch before v2.2.0 has an out-of-bounds read vulnerability in the component torch/csrc/jit/mobile/flatbuffer_loader.cpp.

AI-Powered Analysis

Machine-generated threat intelligence

Last updated: 02/26/2026, 12:25:03 UTC

Technical Analysis

CVE-2024-31584 is an out-of-bounds read vulnerability identified in the PyTorch machine learning framework, specifically within the torch/csrc/jit/mobile/flatbuffer_loader.cpp source file. This vulnerability exists in versions prior to 2.2.0 and arises from improper bounds checking when loading flatbuffer data structures used in the JIT mobile runtime. An out-of-bounds read can lead to the application reading memory outside the intended buffer, which may result in information disclosure or cause the application to behave unpredictably.

The vulnerability requires an attacker to have low privileges (PR:L) and user interaction (UI:R), and can be exploited remotely over a network (AV:N) with low attack complexity (AC:L). The scope remains unchanged (S:U), meaning the vulnerability affects only the vulnerable component without impacting other system components. The CVSS 3.1 base score is 5.5, reflecting medium severity with potential impacts on confidentiality, integrity, and availability, though these impacts are limited. The vulnerability is classified under CWE-125 (Out-of-bounds Read), a common memory safety issue.

No public exploits or active exploitation have been reported to date. Since PyTorch is widely used in AI/ML development, especially for mobile and embedded deployments, this vulnerability could affect applications that load and execute JIT-compiled models on such devices. The lack of an available patch at the time of disclosure necessitates cautious handling of affected versions and monitoring for updates.
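The CWE-125 pattern described above boils down to trusting a length field from untrusted input before reading. A minimal, hypothetical sketch of the required check when parsing a length-prefixed record (this is illustrative only and is not the actual PyTorch or FlatBuffers loader code):

```python
import struct

def read_record(buf: bytes):
    """Parse one length-prefixed record from an untrusted buffer.

    Returns the payload bytes, or None if the buffer is malformed.
    The vulnerable pattern (CWE-125) trusts the attacker-controlled
    length field and reads past the end of the allocation; the bounds
    check below is what a loader must do before dereferencing.
    """
    if len(buf) < 4:
        return None                      # header does not even fit
    (length,) = struct.unpack_from("<I", buf, 0)
    if length > len(buf) - 4:
        return None                      # declared length exceeds buffer
    return buf[4:4 + length]
```

A well-formed buffer such as `struct.pack("<I", 3) + b"abc"` yields `b"abc"`, while a buffer declaring 255 bytes of payload but containing only one is rejected rather than read past its end.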

Potential Impact

The primary impact of CVE-2024-31584 is unauthorized reading of memory beyond allocated buffers, which can lead to partial disclosure of sensitive information residing in adjacent memory areas. This can compromise confidentiality by leaking data that should remain protected. Integrity impact is limited but possible if the out-of-bounds read leads to application instability or crashes, potentially disrupting normal operations. Availability could be affected if the vulnerability causes application crashes or denial-of-service conditions.

Since exploitation requires low privileges and user interaction, attackers might leverage social engineering or trick users into triggering the vulnerability remotely. Organizations deploying PyTorch in production environments, especially those running JIT-compiled models on mobile or embedded platforms, could face risks of data leakage or service disruption. Although no known exploits exist currently, the widespread use of PyTorch in AI/ML workflows means that attackers may develop exploits once patches are released, increasing risk over time. Failure to address this vulnerability could undermine trust in AI applications and expose sensitive model or data information.

Mitigation Recommendations

1. Upgrade to PyTorch version 2.2.0 or later as soon as the patch becomes available to eliminate the vulnerability.
2. Until a patch is available, restrict network access to systems running vulnerable PyTorch versions, especially those exposing JIT mobile runtime components.
3. Implement strict input validation and sanitization on any data fed into the JIT mobile flatbuffer loader to minimize the risk of malformed input.
4. Monitor application logs and system behavior for unusual crashes or memory access errors that could indicate attempted exploitation.
5. Employ runtime protections such as sandboxing around PyTorch components to limit the impact of out-of-bounds reads.
6. Educate users and developers about social engineering attacks that could trigger the user interaction required for exploitation.
7. Conduct regular security assessments of AI/ML deployment environments to identify and remediate similar memory safety issues proactively.
8. Use network segmentation to isolate AI/ML infrastructure from general user networks to reduce exposure.


Technical Details

Data Version: 5.1
Assigner Short Name: mitre
Date Reserved: 2024-04-05T00:00:00.000Z
CVSS Version: 3.1
State: PUBLISHED

Threat ID: 699f6dd3b7ef31ef0b58eee7

Added to database: 2/25/2026, 9:46:59 PM

Last enriched: 2/26/2026, 12:25:03 PM

Last updated: 4/12/2026, 11:45:10 AM



