CVE-2025-46153: n/a
PyTorch before 2.7.0 includes a bernoulli_p decompose function in decompositions.py that is not fully consistent with the eager CPU implementation, negatively affecting nn.Dropout1d, nn.Dropout2d, and nn.Dropout3d when fallback_random=True.
AI Analysis
Technical Summary
CVE-2025-46153 identifies a flaw in PyTorch versions prior to 2.7.0 in the bernoulli_p decompose function located in decompositions.py. This decomposition is exercised when models containing the dropout layers nn.Dropout1d, nn.Dropout2d, and nn.Dropout3d are compiled with the fallback_random option enabled, a setting intended to make compiled random-number generation reproduce eager-mode behavior. The issue is that the bernoulli_p decomposition lacks full consistency with the eager CPU implementation of these dropout layers. Dropout is a regularization technique commonly used in neural networks to prevent overfitting by randomly zeroing elements of the input tensor during training. Because of the inconsistency, dropout behavior may differ between the compiled fallback_random path and the eager CPU path, potentially leading to unexpected or incorrect model behavior during training or inference. Although this is not a direct code execution or memory corruption vulnerability, it represents a correctness flaw in a widely used machine learning framework and can affect the reliability and reproducibility of models that rely on dropout layers with fallback_random enabled. Since PyTorch is a foundational library for AI and machine learning applications, the inconsistency could propagate errors into downstream applications that depend on model accuracy and stability. There are no known exploits in the wild, and no CVSS score has been assigned yet. The vulnerability does not appear to allow unauthorized access or compromise system integrity directly, but it does affect the integrity of model computations. No patch links are currently provided; upgrading to PyTorch 2.7.0 or later is implied to resolve the issue.
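As a rough illustration of the divergence described above, the sketch below compares eager-mode nn.Dropout2d against a torch.compile'd copy using the same RNG seed. It assumes the fallback_random option referenced by the CVE corresponds to torch._inductor.config.fallback_random; on an affected build, the dropout masks produced by the two paths may differ.

# Hedged sketch: compare eager-mode Dropout2d with a torch.compile'd copy under
# fallback_random. Mapping the CVE's fallback_random option to
# torch._inductor.config.fallback_random is an assumption, not confirmed by the advisory.
import torch
import torch.nn as nn

torch._inductor.config.fallback_random = True  # assumed relevant setting

drop = nn.Dropout2d(p=0.5).train()   # dropout is only active in training mode
compiled_drop = torch.compile(drop)

x = torch.randn(1, 4, 8, 8)

torch.manual_seed(0)
eager_out = drop(x)

torch.manual_seed(0)
compiled_out = compiled_drop(x)

# If the decomposition matched the eager CPU implementation exactly, the two
# dropout masks (zeroed channels) should be identical for the same seed.
print("masks identical:", torch.equal(eager_out == 0, compiled_out == 0))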
Potential Impact
For European organizations, the impact of this vulnerability depends largely on their reliance on PyTorch for AI and machine learning workloads, especially those involving dropout layers with fallback_random enabled. Industries such as automotive (autonomous driving), healthcare (medical imaging and diagnostics), finance (algorithmic trading and risk modeling), and research institutions heavily using AI could experience degraded model performance or inconsistent results. This may lead to incorrect decision-making, reduced trust in AI systems, and potential financial or reputational damage. Since the flaw affects model integrity rather than system security directly, the risk is more about data and model correctness than data breaches or service outages. However, in critical applications where AI decisions have safety or compliance implications, such inconsistencies could have serious consequences. European organizations that deploy AI models in production environments without thorough validation might unknowingly propagate errors, affecting downstream services or regulatory compliance. The lack of a known exploit reduces immediate risk, but the subtlety of the issue means it could go unnoticed, making detection and mitigation important.
Mitigation Recommendations
To mitigate this vulnerability, European organizations should:
1) Upgrade PyTorch installations to version 2.7.0 or later, where the inconsistency is resolved (a minimal version-check sketch follows this list).
2) Audit existing AI models that use nn.Dropout1d, nn.Dropout2d, or nn.Dropout3d and are compiled with fallback_random enabled to determine whether they are affected.
3) Conduct rigorous testing and validation of model outputs before and after the upgrade to confirm consistency and correctness.
4) Avoid enabling fallback_random=True unless necessary, or explicitly validate its behavior in the current environment.
5) Implement continuous integration pipelines that include model correctness checks to detect anomalies caused by framework inconsistencies.
6) Monitor PyTorch security advisories and community discussions for emerging patches or workarounds.
7) For critical AI deployments, consider fallback strategies or redundancy in model inference to detect and mitigate unexpected behavior.
These steps go beyond generic patching advice by focusing on model validation and operational controls specific to AI workflows.
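As a small aid to step 1, the sketch below flags environments still running a PyTorch release older than 2.7.0, assumed here to be the first unaffected version. The use of the packaging library and the version-string handling are illustrative choices, not an official remediation check.

# Minimal, hedged version check: warn if the installed PyTorch predates 2.7.0
# (assumed first fixed release for CVE-2025-46153).
from packaging import version
import torch

installed = version.parse(torch.__version__.split("+")[0])  # strip local build tags like "+cu121"
if installed < version.parse("2.7.0"):
    print(f"PyTorch {torch.__version__} may be affected by CVE-2025-46153; "
          "plan an upgrade and re-validate models that compile dropout layers "
          "with fallback_random enabled.")
else:
    print(f"PyTorch {torch.__version__} is at or above 2.7.0.")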
Affected Countries
Germany, France, United Kingdom, Netherlands, Sweden, Finland, Switzerland, Italy
Technical Details
- Data Version: 5.1
- Assigner Short Name: mitre
- Date Reserved: 2025-04-22T00:00:00.000Z
- CVSS Version: null
- State: PUBLISHED
Threat ID: 68d5511823f14e593ee333a8
Added to database: 9/25/2025, 2:26:32 PM
Last enriched: 9/25/2025, 2:27:24 PM
Last updated: 10/6/2025, 7:45:05 AM