
CVE-2025-46153

Severity: Medium
Published: 2025-09-25 (00:00:00 UTC)
Source: CVE Database V5

Description

PyTorch before 2.7.0 ships a bernoulli_p decomposition in decompositions.py that lacks full consistency with the eager CPU implementation, negatively affecting nn.Dropout1d, nn.Dropout2d, and nn.Dropout3d when fallback_random=True.

AI-Powered Analysis

Last updated: 09/25/2025, 14:27:24 UTC

Technical Analysis

CVE-2025-46153 identifies a correctness flaw in PyTorch versions prior to 2.7.0: the bernoulli_p decomposition in decompositions.py is not fully consistent with the eager CPU implementation. The decomposition is exercised by the dropout layers nn.Dropout1d, nn.Dropout2d, and nn.Dropout3d when fallback_random is set to true, an option intended to make compiled code reuse the eager random-number path.

Dropout is a regularization technique that prevents overfitting by randomly zeroing elements of the input tensor during training. Because of the inconsistency, dropout behavior can differ between the fallback_random path and eager CPU execution, potentially producing unexpected or incorrect model behavior during training or inference.

This is not a code-execution or memory-corruption vulnerability; it is a correctness flaw in a widely used machine learning framework. It can undermine the reliability and reproducibility of models that rely on dropout layers with fallback_random enabled, and since PyTorch is a foundational library for AI and machine learning applications, the inconsistency could propagate errors into downstream applications that depend on model accuracy and stability. There are no known exploits in the wild, and no CVSS score has been assigned yet. The flaw does not directly allow unauthorized access or compromise system integrity, but it does affect the integrity of model computations. No patch links are currently provided; upgrading to PyTorch 2.7.0 or later is implied to resolve the issue.
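The divergence can be probed directly. Below is a minimal sketch, not taken from the advisory, that compares a dropout module run eagerly against the same module run under torch.compile with the inductor flag torch._inductor.config.fallback_random enabled. The flag and the compile API are public PyTorch interfaces; the tensor shapes, dropout probability, and seed are arbitrary choices for illustration, and whether a mismatch actually appears depends on the installed version and backend.

    # Minimal sketch: check whether compiled dropout matches the eager CPU
    # implementation when fallback_random is enabled. On affected versions
    # the dropout masks may diverge despite identical seeds.
    import torch
    import torch._inductor.config as inductor_config

    inductor_config.fallback_random = True  # route compiled RNG through the eager path

    drop = torch.nn.Dropout2d(p=0.5)
    drop.train()  # dropout is only active in training mode
    compiled_drop = torch.compile(drop)

    x = torch.randn(2, 3, 8, 8)

    torch.manual_seed(0)
    eager_out = drop(x)

    torch.manual_seed(0)
    compiled_out = compiled_drop(x)

    # With fallback_random=True the two outputs should be identical; a mismatch
    # is the kind of inconsistency this CVE describes.
    print("outputs match:", torch.equal(eager_out, compiled_out))

A check of this shape fits naturally into a regression test run before and after a framework upgrade.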

Potential Impact

For European organizations, the impact of this vulnerability depends largely on their reliance on PyTorch for AI and machine learning workloads, especially those involving dropout layers with fallback_random enabled. Industries such as automotive (autonomous driving), healthcare (medical imaging and diagnostics), finance (algorithmic trading and risk modeling), and research institutions heavily using AI could experience degraded model performance or inconsistent results. This may lead to incorrect decision-making, reduced trust in AI systems, and potential financial or reputational damage.

Since the flaw affects model integrity rather than system security directly, the risk is more about data and model correctness than data breaches or service outages. However, in critical applications where AI decisions have safety or compliance implications, such inconsistencies could have serious consequences. European organizations that deploy AI models in production environments without thorough validation might unknowingly propagate errors, affecting downstream services or regulatory compliance. The lack of a known exploit reduces immediate risk, but the subtlety of the issue means it could go unnoticed, making detection and mitigation important.

Mitigation Recommendations

To mitigate this vulnerability, European organizations should:

1. Upgrade PyTorch installations to version 2.7.0 or later, where the inconsistency is resolved.
2. Audit existing AI models that use nn.Dropout1d, nn.Dropout2d, or nn.Dropout3d with fallback_random enabled to determine whether they are affected (a starting point is sketched after this list).
3. Rigorously test and validate model outputs before and after the upgrade to ensure consistency and correctness.
4. Avoid fallback_random=True in dropout-bearing workloads unless necessary, or explicitly validate its behavior in the current environment.
5. Add model-correctness checks to continuous integration pipelines to detect anomalies caused by framework inconsistencies.
6. Track PyTorch security advisories and community discussions for emerging patches or workarounds.
7. For critical AI deployments, consider fallback strategies or redundancy in model inference to detect and mitigate unexpected behavior.

These steps go beyond generic patching advice by focusing on model validation and operational controls specific to AI workflows.
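As a starting point for the audit in step 2, the following sketch uses a hypothetical helper, find_affected_dropout, which is not part of PyTorch. It walks a model's module tree with the standard named_modules API and flags the dropout variants named in the advisory, alongside a simple report of the installed version:

    # Hypothetical audit helper: flag dropout layers covered by this CVE in a
    # model you intend to run with torch.compile and fallback_random=True.
    import torch
    import torch.nn as nn

    AFFECTED_DROPOUT = (nn.Dropout1d, nn.Dropout2d, nn.Dropout3d)

    def find_affected_dropout(model: nn.Module) -> list[str]:
        """Return the qualified names of modules of an affected dropout type."""
        return [name for name, module in model.named_modules()
                if isinstance(module, AFFECTED_DROPOUT)]

    # Example model for illustration only.
    model = nn.Sequential(nn.Conv2d(3, 8, 3), nn.Dropout2d(p=0.3), nn.ReLU())
    print("torch version:", torch.__version__)
    print("affected dropout modules:", find_affected_dropout(model))  # e.g. ['1']

Any flagged module names can then be cross-referenced against configurations that enable fallback_random before deciding whether an upgrade or revalidation is required.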


Technical Details

Data Version: 5.1
Assigner Short Name: mitre
Date Reserved: 2025-04-22T00:00:00.000Z
CVSS Version: none assigned
State: PUBLISHED

Threat ID: 68d5511823f14e593ee333a8

Added to database: 9/25/2025, 2:26:32 PM

Last enriched: 9/25/2025, 2:27:24 PM

Last updated: 10/6/2025, 7:45:05 AM

Views: 32

Community Reviews

0 reviews

