
CVE-2026-34760: CWE-20: Improper Input Validation in vllm-project vllm

Medium
Tags: Vulnerability, CVE-2026-34760, CWE-20
Published: Thu Apr 02 2026 (04/02/2026, 18:59:49 UTC)
Source: CVE Database V5
Vendor/Project: vllm-project
Product: vllm

Description

vLLM is an inference and serving engine for large language models (LLMs). In versions 0.5.5 up to, but not including, 0.18.0, vLLM's audio preprocessing relies on Librosa, which defaults to numpy.mean for mono downmixing (to_mono), whereas the international standard ITU-R BS.775-4 specifies a weighted downmixing algorithm. This discrepancy results in an inconsistency between the audio heard by humans (e.g., through headphones or regular speakers) and the audio processed by AI models (i.e., any stack that preprocesses audio via Librosa, such as vLLM or Transformers). This issue has been patched in version 0.18.0.

AI-Powered Analysis

Machine-generated threat intelligence

Last updated: 04/02/2026, 19:56:09 UTC

Technical Analysis

CVE-2026-34760 identifies an improper input validation vulnerability (CWE-20) in vllm-project's vllm, an inference and serving engine for large language models (LLMs). The root cause lies in the audio downmixing step handled by the Librosa library, which vllm uses for audio preprocessing. In versions 0.5.5 up to, but not including, 0.18.0, Librosa's default multichannel-to-mono conversion (to_mono) is numpy.mean, a simple arithmetic mean across channels. The international standard ITU-R BS.775-4, however, specifies a weighted downmixing algorithm that better reflects human auditory perception by weighting channels differently. As a result, the audio input processed by AI models differs from what humans actually hear, potentially causing inference results to be inaccurate or inconsistent. This is a form of integrity compromise: the model's input data does not faithfully represent the real-world audio environment.

The vulnerability carries a CVSS 3.1 base score of 5.9 (medium severity): network attack vector, high attack complexity, low privileges required, no user interaction, with impacts on integrity and availability but not confidentiality.

The issue was publicly disclosed on April 2, 2026, and is fixed in vllm version 0.18.0. No exploits are known in the wild. The vulnerability is particularly relevant for organizations relying on vllm for audio-based AI inference, such as speech recognition, audio classification, or other LLM applications that take audio inputs.
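The discrepancy can be illustrated with plain NumPy: for a (channels, samples) array, librosa's default to_mono is equivalent to np.mean along the channel axis. The 5.1 channel order and the 1/√2 centre/surround weights below follow the BS.775 family of downmix coefficients, but the exact mono mapping is an assumption for illustration, not the code from the patch:

```python
import numpy as np

# Synthetic 5.1 signal; channel order assumed to be [FL, FR, FC, LFE, SL, SR].
# The centre channel carries the dominant "speech" content.
n = 1000
t = np.arange(n) / n
fc = np.sin(2 * np.pi * 5 * t)                 # centre: dominant content
fl = fr = 0.3 * np.sin(2 * np.pi * 3 * t)      # quieter front left/right
lfe = np.full(n, 0.2)                          # low-frequency effects
sl = sr = np.full(n, 0.1)                      # surrounds
audio = np.stack([fl, fr, fc, lfe, sl, sr])    # shape (6, n)

# What the default to_mono does: an unweighted mean, so every channel --
# including LFE -- contributes with equal weight 1/6.
mono_mean = np.mean(audio, axis=0)

# A BS.775-style weighted downmix: full weight on front L/R, 1/sqrt(2) on
# centre and surrounds, LFE excluded, normalised to unit total gain.
# (Hypothetical 5.1 -> mono mapping, for illustration only.)
w = np.array([1.0, 1.0, 2**-0.5, 0.0, 2**-0.5, 2**-0.5])
mono_weighted = (w / w.sum()) @ audio

# The two downmixes differ in channel balance:
print(np.max(np.abs(mono_mean - mono_weighted)))
```

The equal-weight mean folds the LFE channel in at full strength and underweights the front channels relative to the standard's coefficients, which is the balance shift the advisory describes.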

Potential Impact

The primary impact of CVE-2026-34760 is on the integrity of AI inference results in applications using the affected versions of vllm. Because the audio downmixing does not conform to the ITU-R BS.775-4 standard, the AI models receive audio inputs that differ from what humans perceive, potentially leading to inaccurate or inconsistent outputs. This can degrade the quality and reliability of AI-driven services such as voice assistants, transcription services, audio analytics, and other LLM-based audio applications. While confidentiality is not affected, the integrity compromise can undermine trust in AI outputs and cause operational disruptions if decisions are based on flawed audio data. Availability impact is low but possible if downstream systems rely on accurate audio processing and fail due to inconsistent inputs.

The attack complexity is high, and exploitation requires network access with low privileges, limiting the ease of exploitation. No user interaction is needed, which means automated attacks could be possible if the vulnerability is exposed. Organizations that have integrated vllm into critical audio processing pipelines may experience degraded service quality or erroneous AI behavior until patched.

Mitigation Recommendations

To mitigate CVE-2026-34760, organizations should upgrade vllm to version 0.18.0 or later, where the issue has been patched by aligning Librosa's downmixing method with the ITU-R BS.775-4 standard. For environments where immediate upgrading is not feasible, a temporary mitigation is to manually override the audio downmixing process in the preprocessing pipeline to implement the weighted downmixing algorithm as per ITU-R BS.775-4. This requires development effort to replace numpy.mean with a compliant weighted sum of audio channels. Additionally, organizations should audit their AI inference pipelines to verify that audio inputs are processed correctly and validate outputs against expected results to detect anomalies caused by this vulnerability. Monitoring network access to vllm services and restricting access to trusted users can reduce exposure. Finally, maintain up-to-date dependency management to ensure that underlying libraries like Librosa are also kept current to prevent similar issues.
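For the temporary mitigation described above, a minimal sketch of a weighted downmix override is shown below. The function name, channel-order assumption ([FL, FR, FC, LFE, SL, SR]), and normalised 1/√2 coefficients are all hypothetical; verify the channel layout produced by your own audio loader before substituting anything like this for the default to_mono:

```python
import numpy as np

# Hypothetical BS.775-style weights per channel count, normalised at use time.
# Stereo: equal weights (same result as a plain mean). 5.1: full weight on
# front L/R, 1/sqrt(2) on centre and surrounds, LFE excluded.
BS775_WEIGHTS = {
    2: np.array([1.0, 1.0]),
    6: np.array([1.0, 1.0, 2**-0.5, 0.0, 2**-0.5, 2**-0.5]),
}

def weighted_to_mono(y: np.ndarray) -> np.ndarray:
    """Downmix a (channels, samples) array to mono with BS.775-style weights."""
    if y.ndim == 1:
        return y  # already mono
    w = BS775_WEIGHTS.get(y.shape[0])
    if w is None:
        raise ValueError(f"no downmix weights defined for {y.shape[0]} channels")
    return (w / w.sum()) @ y
```

In a preprocessing pipeline, this would replace the default to_mono call on the (channels, samples) array returned by the audio loader; upgrading to vllm 0.18.0 remains the proper fix.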


Technical Details

Data Version
5.2
Assigner Short Name
GitHub_M
Date Reserved
2026-03-30T19:17:10.225Z
Cvss Version
3.1
State
PUBLISHED

Threat ID: 69cec5aae6bfc5ba1dfbd810

Added to database: 4/2/2026, 7:38:18 PM

Last enriched: 4/2/2026, 7:56:09 PM

Last updated: 4/9/2026, 6:47:12 AM

Views: 83


