
CVE-2022-45907: Arbitrary Code Execution in PyTorch

Critical
Tags: Vulnerability, CVE-2022-45907, CWE-94
Published: Sat Nov 26 2022 (11/26/2022, 00:00:00 UTC)
Source: CVE
Vendor/Project: n/a
Product: n/a

Description

In PyTorch before trunk/89695, torch.jit.annotations.parse_type_line can cause arbitrary code execution because eval is used unsafely.

AI-Powered Analysis

Last updated: 06/22/2025, 05:22:35 UTC

Technical Analysis

CVE-2022-45907 is a critical vulnerability in the PyTorch machine learning framework, specifically in the function torch.jit.annotations.parse_type_line. This function passes type-annotation strings to Python's eval() without sanitization or validation, so eval() executes the input string as Python code and an attacker who controls that string can inject and execute arbitrary code. The vulnerability affects versions of PyTorch prior to the trunk/89695 commit; exact release numbers are not specified.

The CVSS v3.1 base score is 9.8, indicating critical severity: network attack vector (AV:N), low attack complexity (AC:L), no privileges required (PR:N), no user interaction (UI:N), unchanged scope (S:U), and high impacts on confidentiality (C:H), integrity (I:H), and availability (A:H). An attacker can therefore exploit this vulnerability remotely without authentication or user interaction, leading to full system compromise.

The underlying weakness is CWE-94 (Improper Control of Generation of Code), a common flaw in software that improperly handles dynamic code generation or evaluation. No exploits are known in the wild yet, but the critical severity and low exploitation barrier make this a significant threat. It is particularly relevant to organizations using PyTorch for AI/ML workloads that process untrusted input or expose JIT compilation features in networked environments. Because PyTorch is widely used in research, industry, and cloud environments, this vulnerability poses a risk to the confidentiality, integrity, and availability of affected systems and data.
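The unsafe pattern, and one safer alternative, can be sketched as follows. This is a simplified illustration, not PyTorch's actual implementation; the function names here are hypothetical.

```python
import ast

def parse_annotation_unsafe(type_line: str):
    # Vulnerable pattern behind CVE-2022-45907 (simplified sketch):
    # the annotation string is handed to eval(), so an input such as
    # "__import__('os').system('id')" runs as arbitrary Python code.
    return eval(type_line)

def parse_annotation_safe(type_line: str) -> str:
    # Safer pattern: parse the string as an expression WITHOUT executing
    # it, then accept only a plain name or dotted-attribute chain such
    # as "torch.Tensor"; anything else (calls, subscripts of calls,
    # lambdas, ...) is rejected before it can ever be evaluated.
    tree = ast.parse(type_line, mode="eval")
    node = tree.body
    while isinstance(node, ast.Attribute):
        node = node.value
    if not isinstance(node, ast.Name):
        raise ValueError(f"unsupported annotation: {type_line!r}")
    return type_line
```

The key design point is that ast.parse builds a syntax tree without executing anything, which lets the caller inspect and reject dangerous constructs before any evaluation takes place.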

Potential Impact

For European organizations, the impact of CVE-2022-45907 can be substantial. Many European enterprises, research institutions, and technology companies rely on PyTorch for AI and machine learning applications, including data analytics, autonomous systems, and cloud services. Exploitation of this vulnerability could allow attackers to execute arbitrary code remotely, potentially leading to data breaches, intellectual property theft, disruption of AI services, and compromise of critical infrastructure. Given the high confidentiality, integrity, and availability impacts, attackers could manipulate AI models, inject malicious payloads, or disrupt automated decision-making processes. This could have cascading effects in sectors such as finance, healthcare, manufacturing, and public services, where AI-driven systems are increasingly integrated.

Additionally, because no authentication or user interaction is required, the barrier for attackers is low, increasing the likelihood of exploitation if vulnerable versions are deployed in exposed environments. The current absence of known exploits provides a window for proactive mitigation, but the critical severity demands urgent attention to prevent potential attacks.

Mitigation Recommendations

1. Upgrade immediately to a PyTorch version that includes the fix (post trunk/89695 commit); this is the most effective mitigation.
2. If upgrading is not immediately possible, restrict access to systems running PyTorch JIT features to trusted networks only, minimizing exposure to untrusted inputs.
3. Implement strict input validation and sanitization for any user-supplied data processed by PyTorch, especially data involved in JIT compilation or type-annotation parsing.
4. Employ runtime application self-protection (RASP) or endpoint detection and response (EDR) solutions to monitor and block suspicious code-execution attempts.
5. Conduct thorough code audits and penetration testing focused on dynamic code-evaluation functions within AI/ML pipelines.
6. Isolate AI/ML workloads in sandboxed or containerized environments to limit the blast radius of potential exploitation.
7. Monitor security advisories and threat-intelligence feeds for emerging exploits targeting this vulnerability so you can respond promptly.
8. Educate development and security teams about the risks of unsafe eval() usage and best practices for secure coding in AI frameworks.
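As an illustration of the input-validation recommendation above, a minimal allowlist check for annotation-like strings might look like the following. This is a hypothetical sketch: the regex and function name are not from PyTorch, and a real deployment would tailor the allowlist to the annotation syntax it actually accepts.

```python
import re

# Accept only dotted identifiers with optional bracketed parameters,
# e.g. "torch.Tensor" or "List[torch.Tensor]"; reject anything with
# call syntax, quotes, or other expression constructs. (Hypothetical
# allowlist; tighten or extend it for the formats you actually use.)
_ANNOTATION_RE = re.compile(r"^[A-Za-z_][\w.]*(\[[\w.,\s\[\]]*\])?$")

def is_safe_annotation(type_line: str) -> bool:
    # fullmatch ensures the WHOLE string conforms; partial matches
    # (e.g. a valid prefix followed by "(...)") are rejected.
    return bool(_ANNOTATION_RE.fullmatch(type_line.strip()))
```

A deny-by-default allowlist like this is generally preferable to trying to blocklist dangerous substrings, which attackers can often encode around.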


Technical Details

Data Version: 5.1
Assigner Short Name: mitre
Date Reserved: 2022-11-26T00:00:00.000Z
CISA Enriched: true

Threat ID: 682d983ec4522896dcbeff03

Added to database: 5/21/2025, 9:09:18 AM

Last enriched: 6/22/2025, 5:22:35 AM

Last updated: 8/15/2025, 2:19:14 AM


