
CVE-2025-46150: n/a

Severity: Medium
Tags: Vulnerability, CVE-2025-46150, cve, cve-2025-46150
Published: Thu Sep 25 2025 (09/25/2025, 00:00:00 UTC)
Source: CVE Database V5

Description

In PyTorch before 2.7.0, when torch.compile is used, FractionalMaxPool2d has inconsistent results.

AI-Powered Analysis

Last updated: 09/25/2025, 14:27:57 UTC

Technical Analysis

CVE-2025-46150 is a vulnerability identified in the PyTorch machine learning framework in versions prior to 2.7.0. The issue arises specifically when the torch.compile feature is used in conjunction with the FractionalMaxPool2d operation. FractionalMaxPool2d is a pooling layer variant used in convolutional neural networks to reduce spatial dimensions while preserving important features. The vulnerability manifests as inconsistent results when FractionalMaxPool2d is executed under torch.compile, the feature designed to optimize PyTorch models for faster execution. These inconsistencies can lead to unpredictable behavior in machine learning models, potentially affecting the accuracy and reliability of AI-driven applications. Although no exploit code or active exploitation has been reported, inconsistent output could be leveraged by attackers to cause denial of service through model malfunction or to subtly manipulate model outputs, impacting integrity. No CVSS score or patch links have been published, which suggests the vulnerability is newly disclosed and has not yet received detailed exploit analysis; the affected-version range, however, indicates that PyTorch 2.7.0 and later are not affected. The root cause likely stems from how torch.compile optimizes or transforms the FractionalMaxPool2d operation, leading to non-deterministic or incorrect pooling results. This could affect any application relying on PyTorch for critical AI workloads, especially those requiring high precision and consistency in model inference or training.
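
As an illustration of the behavior described above, the following sketch compares eager and torch.compile execution of FractionalMaxPool2d on identical inputs. It is a minimal, hypothetical check rather than an official reproduction: the tensor shapes, seed, and fixed _random_samples values are assumptions chosen for demonstration, and it presumes a PyTorch installation (before 2.7.0, to observe the issue) where torch.compile is functional.

import torch
import torch.nn as nn

torch.manual_seed(0)
x = torch.randn(1, 3, 16, 16)

# Fix the random pooling regions so eager and compiled runs pool the same windows.
samples = torch.rand(1, 3, 2)
pool = nn.FractionalMaxPool2d(kernel_size=2, output_size=(8, 8),
                              _random_samples=samples)

eager_out = pool(x)                    # reference result in eager mode
compiled_out = torch.compile(pool)(x)  # same module executed under torch.compile

# On affected versions (before 2.7.0) the two results may diverge;
# on fixed versions they should match exactly.
if torch.equal(eager_out, compiled_out):
    print("Eager and compiled outputs match.")
else:
    diff = (eager_out - compiled_out).abs().max().item()
    print(f"Mismatch detected: max abs diff = {diff}")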

Potential Impact

For European organizations, the impact of this vulnerability can be significant, particularly for sectors heavily reliant on AI and machine learning, such as finance, healthcare, automotive, and telecommunications. Inconsistent results in AI models can lead to erroneous decision-making, reduced trust in automated systems, and potential compliance issues with regulations like GDPR if AI-driven decisions affect personal data processing. For example, healthcare applications using AI for diagnostics or treatment recommendations could produce unreliable outputs, risking patient safety. Financial institutions using AI for fraud detection or risk assessment might experience degraded model performance, leading to financial losses or regulatory scrutiny. Additionally, organizations deploying AI models in safety-critical systems, such as autonomous vehicles or industrial automation, could face operational disruptions or safety hazards. Although no active exploits are known, the vulnerability's presence in a widely used AI framework means that attackers could potentially develop attacks that exploit model inconsistencies to cause denial of service or subtle data manipulation, impacting confidentiality and integrity of AI-driven processes.

Mitigation Recommendations

To mitigate this vulnerability, European organizations should first verify whether they are running PyTorch versions prior to 2.7.0 and whether their AI workloads use torch.compile in combination with FractionalMaxPool2d. Immediate steps include:

1) Avoid using torch.compile with FractionalMaxPool2d until an official patch or update is released (a guard sketch follows this list).
2) If torch.compile must be used, test AI models thoroughly for consistency and correctness, focusing on pooling layers.
3) Monitor official PyTorch channels for patches or updates addressing this issue and apply them promptly once available.
4) Implement robust validation and verification of AI model outputs to detect anomalies that may arise from this vulnerability.
5) Consider alternative pooling methods, or refrain from using FractionalMaxPool2d where feasible.
6) Engage AI and cybersecurity teams to assess the risk profile of affected AI applications and develop contingency plans.
7) Maintain strict access controls and monitoring around AI model deployment environments to detect unusual activity that might indicate exploitation attempts.
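
The first mitigation step can be automated with a small guard that only applies torch.compile when it is safe to do so. This is a minimal sketch under stated assumptions: it relies on the third-party packaging library for version comparison, and the helper name compile_if_safe and the fall-back-to-eager policy are illustrative choices, not part of the advisory.

import torch
import torch.nn as nn
from packaging import version


def compile_if_safe(model: nn.Module) -> nn.Module:
    """Compile the model unless doing so risks the CVE-2025-46150 inconsistency."""
    affected = version.parse(str(torch.__version__)) < version.parse("2.7.0")
    uses_fractional_pool = any(
        isinstance(m, nn.FractionalMaxPool2d) for m in model.modules()
    )
    if affected and uses_fractional_pool:
        # Stay in eager mode until the deployment is upgraded to PyTorch >= 2.7.0.
        return model
    return torch.compile(model)


# Example usage with a toy model that contains FractionalMaxPool2d.
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.FractionalMaxPool2d(kernel_size=2, output_size=(8, 8)),
)
model = compile_if_safe(model)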


Technical Details

Data Version: 5.1
Assigner Short Name: mitre
Date Reserved: 2025-04-22T00:00:00.000Z
CVSS Version: null
State: PUBLISHED

Threat ID: 68d5511823f14e593ee3339d

Added to database: 9/25/2025, 2:26:32 PM

Last enriched: 9/25/2025, 2:27:57 PM

Last updated: 10/7/2025, 1:52:47 PM

Views: 21


