
CVE-2026-22778: CWE-532: Insertion of Sensitive Information into Log File in vllm-project vllm

Severity: Critical
Published: Mon Feb 02 2026 (02/02/2026, 21:09:53 UTC)
Source: CVE Database V5
Vendor/Project: vllm-project
Product: vllm

Description

vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.8.3 up to, but not including, 0.14.1, when an invalid image is sent to vLLM's multimodal endpoint, PIL raises an error. vLLM returns this error text to the client, leaking a heap address. With this leak, an attacker can reduce the ASLR search space from roughly 4 billion guesses to about 8. The leak can be chained with a heap overflow in the JPEG2000 decoder used by OpenCV/FFmpeg to achieve remote code execution. This vulnerability is fixed in 0.14.1.
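The advisory does not quote the exact error text, but PIL's `UnidentifiedImageError` message is known to interpolate the `repr()` of the input file object, and in CPython the default `repr()` embeds the object's heap address. A stdlib-only sketch of that leak mechanism follows; the "cannot identify image file" wording is an assumption modeled on PIL's typical message, not copied from vLLM:

```python
import io
import re

# CPython's default repr() embeds an object's heap address, e.g.
# "<_io.BytesIO object at 0x7f3a2c1b4d60>". An error message that
# interpolates this repr and is echoed back to the client therefore
# discloses a heap pointer.
buf = io.BytesIO(b"\x00\x01not-a-valid-image")
error_message = f"cannot identify image file {buf!r}"  # PIL-style wording (assumed)

# Extract the leaked pointer the way an attacker would.
match = re.search(r"0x[0-9a-fA-F]+", error_message)
leaked_address = int(match.group(0), 16) if match else None
print(hex(leaked_address))
```

With one such response, the attacker knows a live heap address for the target process, which is what collapses the ASLR guessing space described above.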

AI-Powered Analysis

Last updated: 02/02/2026, 23:33:40 UTC

Technical Analysis

CVE-2026-22778 is a critical security vulnerability classified under CWE-532, involving the insertion of sensitive information into log files within the vllm-project's vllm inference engine for large language models. The vulnerability exists in versions 0.8.3 through 0.14.0. When an invalid image is sent to vllm's multimodal endpoint, the Python Imaging Library (PIL) throws an error that is returned to the client, inadvertently leaking a heap memory address.

This leak significantly reduces the entropy of Address Space Layout Randomization (ASLR) from approximately 4 billion possible guesses to about 8, making it trivial for an attacker to predict memory locations. The information leak can be chained with a known heap overflow vulnerability in the JPEG2000 decoder used by OpenCV/FFmpeg, enabling an attacker to execute arbitrary code remotely on the affected system. The attack requires no privileges or user interaction, increasing its risk.

The vulnerability was publicly disclosed and assigned a CVSS v3.1 score of 9.8 (critical), reflecting its high impact on confidentiality, integrity, and availability. The issue is resolved in vllm version 0.14.1, which no longer leaks heap addresses in error messages and mitigates the attack vector. No known exploits are reported in the wild yet, but the ease of exploitation and critical impact necessitate urgent remediation.
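The "4 billion guesses to about 8" figure can be read as an entropy statement: roughly 32 bits of address uncertainty collapsing to about 3 bits once a live heap pointer is known. This back-of-envelope interpretation is ours, not taken from the advisory:

```python
import math

# ~4 billion equally likely layouts ≈ 32 bits of ASLR entropy;
# ~8 remaining candidates after the leak ≈ 3 bits.
entropy_before = math.log2(4_000_000_000)  # ≈ 31.9 bits
entropy_after = math.log2(8)               # 3.0 bits

print(f"before leak: ~{entropy_before:.1f} bits of uncertainty")
print(f"after leak:  ~{entropy_after:.1f} bits of uncertainty")
print(f"bits disclosed by the leak: ~{entropy_before - entropy_after:.1f}")
```

Three residual bits means an attacker expects success within a handful of attempts, which is why the chained heap-overflow exploit becomes practical.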

Potential Impact

For European organizations, this vulnerability poses a severe risk, especially for those leveraging vllm for AI inference and serving large language models with multimodal capabilities. The ability to remotely execute code without authentication or user interaction can lead to full system compromise, data breaches, and disruption of AI services. Confidentiality is at high risk due to heap address leakage, which can be exploited to bypass ASLR and escalate attacks. Integrity and availability are also threatened by potential arbitrary code execution, which could allow attackers to manipulate AI model outputs, corrupt data, or cause denial of service. Organizations in sectors such as finance, healthcare, and critical infrastructure that use AI models for decision-making or automation could face operational and reputational damage. Exploitation could also facilitate lateral movement within networks, increasing the overall attack surface. Given the increasing adoption of AI technologies across Europe, the threat is widespread and urgent.

Mitigation Recommendations

1. Immediately upgrade all vllm deployments to version 0.14.1 or later, where the vulnerability is fixed.
2. Implement strict input validation and sanitization on all image inputs sent to the multimodal endpoint to prevent malformed or invalid images from triggering errors.
3. Employ network-level protections such as Web Application Firewalls (WAFs) to detect and block suspicious payloads targeting the multimodal endpoint.
4. Monitor logs and network traffic for unusual error messages or patterns indicative of exploitation attempts.
5. Restrict access to the vllm multimodal endpoint to trusted users and networks, using authentication and network segmentation where possible.
6. Regularly update and patch dependencies like OpenCV and FFmpeg to their latest secure versions to mitigate chained vulnerabilities.
7. Conduct security assessments and penetration testing focused on AI inference services to identify and remediate similar issues proactively.
8. Educate development and operations teams about secure error handling practices to avoid leaking sensitive information in logs or responses.
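Recommendation 8 (secure error handling) can be sketched as follows: log full exception detail server-side, scrub any hex pointers, and return only a generic message to the client. The logger name and function names here are illustrative, not vLLM's actual API:

```python
import logging
import re

logger = logging.getLogger("vllm.api")  # illustrative name, not vLLM's real logger

ADDRESS_RE = re.compile(r"0x[0-9a-fA-F]+")

def redact_addresses(text: str) -> str:
    """Scrub hex pointers (e.g. from object reprs) out of a message."""
    return ADDRESS_RE.sub("0x<redacted>", text)

def client_safe_error(exc: Exception) -> str:
    """Keep detail in server logs; return only a generic message to the client."""
    logger.error("image decode failed: %s", redact_addresses(repr(exc)))
    return "invalid or unsupported image input"

# Example: an error message shaped like PIL's, containing a heap address.
msg = client_safe_error(
    ValueError("cannot identify image file <_io.BytesIO object at 0x7f3a2c1b4d60>")
)
print(msg)
```

Returning a fixed generic string is the primary control; the pointer redaction is defense in depth in case some exception detail must be surfaced.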


Technical Details

Data Version: 5.2
Assigner Short Name: GitHub_M
Date Reserved: 2026-01-09T18:27:19.388Z
CVSS Version: 3.1
State: PUBLISHED

Threat ID: 69813004f9fa50a62f63a39d

Added to database: 2/2/2026, 11:15:16 PM

Last enriched: 2/2/2026, 11:33:40 PM

Last updated: 2/7/2026, 10:46:11 AM


