CVE-2025-46722: CWE-1288: Improper Validation of Consistency within Input in vllm-project vllm
vLLM is an inference and serving engine for large language models (LLMs). In versions starting from 0.7.0 to before 0.9.0, in the file vllm/multimodal/hasher.py, the MultiModalHasher class has a security and data integrity issue in its image hashing method. Currently, it serializes PIL.Image.Image objects using only obj.tobytes(), which returns only the raw pixel data, without including metadata such as the image’s shape (width, height, mode). As a result, two images of different sizes (e.g., 30x100 and 100x30) with the same pixel byte sequence could generate the same hash value. This may lead to hash collisions, incorrect cache hits, and even data leakage or security risks. This issue has been patched in version 0.9.0.
AI Analysis
Technical Summary
CVE-2025-46722 is a medium-severity vulnerability in the vLLM inference and serving engine for large language models (LLMs), affecting versions from 0.7.0 up to but not including 0.9.0. The flaw resides in the MultiModalHasher class in the file vllm/multimodal/hasher.py and stems from how image objects (PIL.Image.Image) are serialized for hashing. The vulnerable implementation hashes only the raw pixel data returned by obj.tobytes(), omitting metadata such as the image's dimensions (width, height) and mode. As a result, distinct images with different shapes but identical raw pixel byte sequences produce the same hash value. Such collisions can lead to incorrect cache hits, undermining data integrity and potentially causing data leakage or other security risks: an attacker might exploit this to retrieve or manipulate cached data associated with a different image, leading to unauthorized data exposure or corruption. Exploitation requires no user interaction but does require low privileges, and the attack vector is network-based. The issue was fixed in vLLM version 0.9.0, reportedly by including image metadata in the hashing process to ensure uniqueness and consistency. The CVSS v3.1 base score is 4.2 (medium), reflecting limited confidentiality impact, no integrity impact per the scoring vector, and low availability impact.
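A minimal sketch of the flaw and a metadata-aware fix, using only hashlib. The images are simulated as raw byte buffers rather than actual PIL objects, and the hash_image helper is hypothetical, illustrating the approach rather than vLLM's actual patch code:

```python
import hashlib

# Raw pixel data for a 30x100 and a 100x30 grayscale image are both
# 3000 bytes; if the two images carry the same byte sequence, hashing
# the pixels alone cannot tell them apart.
pixels = bytes(3000)  # identical pixel payload for both shapes

# Flawed scheme (mirrors hashing only Image.tobytes()):
h_30x100 = hashlib.sha256(pixels).hexdigest()
h_100x30 = hashlib.sha256(pixels).hexdigest()
assert h_30x100 == h_100x30  # collision: different images, same hash

# Metadata-aware scheme: mix mode and shape into the digest first,
# so images that differ only in shape hash differently.
def hash_image(pixels: bytes, width: int, height: int, mode: str) -> str:
    h = hashlib.sha256()
    h.update(f"{mode}:{width}x{height}:".encode())
    h.update(pixels)
    return h.hexdigest()

assert hash_image(pixels, 30, 100, "L") != hash_image(pixels, 100, 30, "L")
```

Prefixing the digest with mode and dimensions is one simple way to bind the hash to the full image identity; any serialization that unambiguously encodes metadata alongside pixel bytes achieves the same effect.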
Potential Impact
For European organizations utilizing vLLM versions between 0.7.0 and 0.9.0, this vulnerability could undermine the reliability of image-based caching mechanisms within AI inference workflows. Incorrect cache hits due to hash collisions may lead to serving incorrect or stale data, potentially degrading the quality of AI model outputs or causing erroneous decisions in automated systems. In sensitive environments such as healthcare, finance, or critical infrastructure where LLMs might process multimodal data including images, this could result in data leakage or exposure of confidential information. Although the direct impact on integrity is limited, the risk of data leakage and availability degradation could affect compliance with European data protection regulations like GDPR. Additionally, organizations relying on vLLM for AI services may face operational disruptions or reputational damage if the vulnerability is exploited. However, the vulnerability requires network access and low privileges, which somewhat limits the attack surface. No known exploits are currently reported in the wild, reducing immediate risk but warranting proactive patching.
Mitigation Recommendations
European organizations should upgrade vLLM to version 0.9.0 or later, where the hashing process incorporates image metadata to prevent collisions. Until an upgrade is feasible, consider disabling or restricting the MultiModalHasher component for image hashing, or adding validation checks that verify image dimensions and mode before trusting a cache hit. Employ network segmentation and strict access controls so that vLLM services are exposed only to trusted users and systems. Monitoring cache hit/miss patterns for anomalies can help detect exploitation attempts. Additionally, review and audit AI inference pipelines that process images to ensure no sensitive data is leaked through caching errors. Using hashing schemes with proven collision resistance that include image metadata in the digest further mitigates the risk. Finally, maintain an up-to-date vulnerability management process to promptly apply patches and monitor vendor advisories.
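The interim validation check suggested above can be sketched as a defensive cache wrapper that stores image metadata alongside each entry and rejects any hit whose metadata does not match. This ImageCache class is a hypothetical illustration, not part of vLLM's API:

```python
import hashlib

class ImageCache:
    """Hypothetical cache wrapper that verifies image metadata on every
    lookup, so a pixel-level hash collision cannot return the wrong entry."""

    def __init__(self):
        self._store = {}

    @staticmethod
    def _key(pixels: bytes) -> str:
        # Deliberately mirrors the vulnerable scheme: pixels only.
        return hashlib.sha256(pixels).hexdigest()

    def put(self, pixels, width, height, mode, value):
        self._store[self._key(pixels)] = ((width, height, mode), value)

    def get(self, pixels, width, height, mode):
        entry = self._store.get(self._key(pixels))
        if entry is None:
            return None
        meta, value = entry
        if meta != (width, height, mode):
            return None  # metadata mismatch: treat as a miss, not a hit
        return value

cache = ImageCache()
pixels = bytes(3000)
cache.put(pixels, 30, 100, "L", "cached-result")
assert cache.get(pixels, 30, 100, "L") == "cached-result"
assert cache.get(pixels, 100, 30, "L") is None  # collision rejected
```

This does not fix the weak key itself, but it converts a dangerous false cache hit into a harmless cache miss until the patched version can be deployed.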
Affected Countries
Germany, France, United Kingdom, Netherlands, Sweden, Finland, Italy, Spain
Technical Details
- Data Version: 5.1
- Assigner Short Name: GitHub_M
- Date Reserved: 2025-04-28T20:56:09.084Z
- Cvss Version: 3.1
- State: PUBLISHED
Threat ID: 68389281182aa0cae2860f37
Added to database: 5/29/2025, 4:59:45 PM
Last enriched: 7/7/2025, 10:56:52 PM
Last updated: 8/5/2025, 2:29:09 PM
Related Threats
- CVE-2025-49568: Use After Free (CWE-416) in Adobe Illustrator (Medium)
- CVE-2025-49567: NULL Pointer Dereference (CWE-476) in Adobe Illustrator (Medium)
- CVE-2025-49564: Stack-based Buffer Overflow (CWE-121) in Adobe Illustrator (High)
- CVE-2025-49563: Out-of-bounds Write (CWE-787) in Adobe Illustrator (High)
- CVE-2025-32086: Escalation of Privilege in Intel(R) Xeon(R) 6 Processors when using Intel(R) SGX or Intel(R) TDX (Medium)