
CVE-2025-46722: CWE-1288: Improper Validation of Consistency within Input in vllm-project vllm

Severity: Medium
Tags: Vulnerability, CVE-2025-46722, CWE-1288, CWE-1023
Published: Thu May 29 2025 (05/29/2025, 16:36:12 UTC)
Source: CVE Database V5
Vendor/Project: vllm-project
Product: vllm

Description

vLLM is an inference and serving engine for large language models (LLMs). In versions from 0.7.0 up to but not including 0.9.0, the MultiModalHasher class in vllm/multimodal/hasher.py has a security and data-integrity issue in its image hashing method: it serializes PIL.Image.Image objects using only obj.tobytes(), which returns the raw pixel data without metadata such as the image's shape (width, height) and mode. As a result, two images of different sizes (e.g., 30x100 and 100x30) with the same pixel byte sequence can generate the same hash value. This may lead to hash collisions, incorrect cache hits, and even data leakage or other security risks. This issue has been patched in version 0.9.0.

AI-Powered Analysis

Last updated: 07/07/2025, 22:56:52 UTC

Technical Analysis

CVE-2025-46722 is a medium-severity vulnerability affecting the vLLM inference and serving engine for large language models (LLMs), specifically versions from 0.7.0 up to but not including 0.9.0. The flaw resides in the MultiModalHasher class in the file vllm/multimodal/hasher.py and stems from how image objects (PIL.Image.Image) are serialized for hashing: the affected implementation hashes only the raw pixel data obtained via obj.tobytes(), omitting critical metadata such as the image dimensions (width, height) and mode. Distinct images with different sizes but identical raw pixel byte sequences can therefore produce the same hash value. Such collisions can cause incorrect cache hits, undermining data integrity and potentially leaking data: for example, an attacker might exploit this to retrieve or manipulate cached data associated with a different image, leading to unauthorized data exposure or corruption. Exploitation requires no user interaction but does require low privileges, and the attack vector is network-based. The issue has been addressed in vLLM version 0.9.0, presumably by including image metadata in the hashing process to ensure uniqueness and consistency. The CVSS v3.1 base score is 4.2, reflecting medium severity with limited confidentiality impact, no integrity impact, and low availability impact.
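The collision mechanism can be reproduced with the standard library alone. The sketch below simulates the pre-0.9.0 behaviour by hashing a raw pixel buffer the way obj.tobytes() exposes it; the payload and shapes are illustrative, not taken from vLLM's code:

```python
import hashlib

# Illustrative raw pixel payload: 3000 bytes, which a grayscale ("L") image
# could represent as either 30x100 or 100x30. PIL's tobytes() would return
# exactly the same byte sequence for both shapes.
pixels = bytes(range(250)) * 12  # 3000 bytes

def hash_pixels_only(raw: bytes) -> str:
    """Pre-0.9.0 style: the digest covers pixel bytes only, not shape or mode."""
    return hashlib.sha256(raw).hexdigest()

h_30x100 = hash_pixels_only(pixels)  # image interpreted as 30x100
h_100x30 = hash_pixels_only(pixels)  # image interpreted as 100x30
print(h_30x100 == h_100x30)  # True: two distinct images collide
```

Because the digest never sees the shape, any cache keyed on it treats the two images as identical, which is exactly the incorrect-cache-hit scenario described above.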

Potential Impact

For European organizations utilizing vLLM versions between 0.7.0 and 0.9.0, this vulnerability could undermine the reliability of image-based caching mechanisms within AI inference workflows. Incorrect cache hits due to hash collisions may lead to serving incorrect or stale data, potentially degrading the quality of AI model outputs or causing erroneous decisions in automated systems. In sensitive environments such as healthcare, finance, or critical infrastructure where LLMs might process multimodal data including images, this could result in data leakage or exposure of confidential information. Although the direct impact on integrity is limited, the risk of data leakage and availability degradation could affect compliance with European data protection regulations like GDPR. Additionally, organizations relying on vLLM for AI services may face operational disruptions or reputational damage if the vulnerability is exploited. However, the vulnerability requires network access and low privileges, which somewhat limits the attack surface. No known exploits are currently reported in the wild, reducing immediate risk but warranting proactive patching.

Mitigation Recommendations

European organizations should upgrade vLLM to version 0.9.0 or later, where the vulnerability is patched by incorporating image metadata into the hashing process to prevent collisions. Until upgrade is feasible, organizations should consider disabling or restricting the use of the MultiModalHasher component for image hashing or implement additional validation checks to verify image dimensions and metadata before caching. Employ network segmentation and strict access controls to limit exposure of vLLM services to trusted users and systems only. Monitoring cache hit/miss patterns for anomalies could help detect exploitation attempts. Additionally, organizations should review and audit AI inference pipelines that process images to ensure no sensitive data leakage occurs due to caching errors. Incorporating image hashing libraries with proven collision resistance or cryptographic hashing methods that include metadata can further mitigate risks. Finally, maintain up-to-date vulnerability management processes to promptly apply patches and monitor vendor advisories.
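One way to realize the metadata-inclusive hashing recommended above (a sketch under stated assumptions, not vLLM's actual patched code) is to bind the image's mode and size into the digest ahead of the pixel bytes:

```python
import hashlib

def hash_with_metadata(raw: bytes, mode: str, size: tuple[int, int]) -> str:
    """Sketch of a collision-resistant image hash: mix the mode string and
    (width, height) into the digest so that identical pixel bytes under
    different shapes no longer collide."""
    h = hashlib.sha256()
    h.update(mode.encode("ascii"))
    h.update(repr(size).encode("ascii"))
    h.update(raw)
    return h.hexdigest()

pixels = bytes(3000)  # same raw bytes, two candidate shapes
print(hash_with_metadata(pixels, "L", (30, 100)) ==
      hash_with_metadata(pixels, "L", (100, 30)))  # False: shapes now differ
```

Serializing the metadata with an unambiguous delimiter (here repr of the tuple) matters: simply concatenating undelimited fields could reintroduce collisions between different (mode, size) combinations.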


Technical Details

Data Version
5.1
Assigner Short Name
GitHub_M
Date Reserved
2025-04-28T20:56:09.084Z
Cvss Version
3.1
State
PUBLISHED

Threat ID: 68389281182aa0cae2860f37

Added to database: 5/29/2025, 4:59:45 PM

Last enriched: 7/7/2025, 10:56:52 PM

Last updated: 11/22/2025, 4:46:24 PM

