CVE-2025-46560: CWE-1333: Inefficient Regular Expression Complexity in vllm-project vllm
vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Versions starting from 0.8.0 and prior to 0.8.5 are affected by a critical performance vulnerability in the input preprocessing logic of the multimodal tokenizer. The code dynamically replaces placeholder tokens (e.g., <|audio_|>, <|image_|>) with repeated tokens based on precomputed lengths. Due to inefficient list concatenation operations, the algorithm exhibits quadratic time complexity (O(n²)), allowing malicious actors to trigger resource exhaustion via specially crafted inputs. This issue has been patched in version 0.8.5.
AI Analysis
Technical Summary
CVE-2025-46560 affects vLLM, the vllm-project's high-throughput, memory-efficient inference and serving engine for large language models (LLMs), in versions from 0.8.0 up to but not including 0.8.5. The flaw sits in the multimodal tokenizer's input preprocessing logic, where placeholder tokens such as <|audio_|> and <|image_|> are dynamically replaced with runs of repeated tokens based on precomputed lengths. Because the implementation builds the result through repeated list concatenation, the replacement step runs in quadratic time (O(n²)) in the input length. An attacker can exploit this asymmetry by submitting specially crafted inputs that drive CPU and memory consumption high enough to cause a denial-of-service (DoS) condition. The vulnerability does not affect confidentiality or integrity but directly impacts availability. Exploitation requires network access and low privileges (PR:L) but no user interaction (UI:N), and the attack complexity is low. It has been assigned a CVSS v3.1 base score of 6.5 (medium severity), reflecting the availability impact combined with the ease of exploitation. No exploits are currently known in the wild. The issue is fixed in vLLM 0.8.5; users should upgrade to that version or later.
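The advisory does not reproduce the affected code, but the complexity class it describes is easy to demonstrate. Below is a minimal Python sketch contrasting the quadratic build-by-concatenation anti-pattern with a linear extend-based fix. The names (PLACEHOLDER, expand_quadratic, expand_linear) are hypothetical illustrations of the described pattern, not the actual vLLM implementation:

```python
import time

PLACEHOLDER = -1  # hypothetical sentinel standing in for e.g. <|audio_|>


def expand_quadratic(tokens: list[int], repeat_len: int) -> list[int]:
    """Anti-pattern: `out = out + chunk` copies the entire accumulated
    list on every step, so n tokens cost O(n^2) work in total."""
    out: list[int] = []
    for tok in tokens:
        if tok == PLACEHOLDER:
            out = out + [0] * repeat_len  # full copy of `out` each time
        else:
            out = out + [tok]             # full copy here too
    return out


def expand_linear(tokens: list[int], repeat_len: int) -> list[int]:
    """Fix: in-place extend/append amortizes to O(output length)."""
    out: list[int] = []
    for tok in tokens:
        if tok == PLACEHOLDER:
            out.extend([0] * repeat_len)
        else:
            out.append(tok)
    return out


if __name__ == "__main__":
    crafted = [PLACEHOLDER] * 20_000  # attacker-controlled repetition
    for fn in (expand_quadratic, expand_linear):
        start = time.perf_counter()
        fn(crafted, repeat_len=10)
        print(f"{fn.__name__}: {time.perf_counter() - start:.2f}s")
```

On typical hardware the quadratic variant is orders of magnitude slower on this input. That asymmetry is exactly what a DoS attacker exploits: a small, cheap request buys a disproportionate amount of server CPU time.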
Potential Impact
European organizations running vLLM versions from 0.8.0 up to but not including 0.8.5 face a risk of service disruption through denial-of-service attacks. Because vLLM serves large language models that may back AI-driven applications, chatbots, or data processing pipelines, an attacker could degrade or halt critical AI services simply by sending maliciously crafted inputs. Sectors that rely heavily on AI inference, such as finance, healthcare, telecommunications, and public services, are most exposed. The impact is confined to availability: downtime, delayed responses, and increased operational costs from resource exhaustion. Since exploitation requires only low privileges and no user interaction, both internal and external actors with network access are plausible attackers. The absence of confidentiality or integrity impact limits data-breach and manipulation risk but does not diminish the operational disruption threat. Organizations with vLLM-based AI infrastructure should plan for targeted DoS attempts, especially where uptime and responsiveness are critical.
Mitigation Recommendations
1. Upgrade immediately to vLLM 0.8.5 or later, where the vulnerability is patched; this is the most effective mitigation.
2. Implement input validation and rate limiting on all interfaces that feed the multimodal tokenizer, blocking unusually large or repetitive placeholder-token sequences that could trigger the quadratic expansion (a sketch follows this list).
3. Deploy resource monitoring and anomaly detection to flag unusual CPU or memory spikes in vLLM processes, enabling rapid response to exploitation attempts.
4. Isolate vLLM inference services in segmented network zones with strict access controls to limit exposure to untrusted users or external networks.
5. If an immediate upgrade is not possible, apply temporary mitigations such as capping input size or, where feasible, disabling multimodal token processing.
6. Maintain logging and alerting on vLLM performance metrics to detect early signs of exploitation.
7. Engage with the vLLM community for backported patches or additional security advisories.
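As interim defense in depth for recommendation 2, a front-end gateway can reject prompts containing an implausible number of multimodal placeholder tokens before they ever reach the tokenizer. The sketch below is a hypothetical pre-filter; the placeholder strings, thresholds, and the is_prompt_safe function are deployment-specific assumptions, not part of the vLLM API:

```python
# Hypothetical pre-filter for recommendation 2; names and thresholds are
# deployment-specific assumptions, not part of the vLLM API.
PLACEHOLDERS = ("<|audio_", "<|image_")  # adjust to your model's tokens
MAX_PLACEHOLDERS = 16                    # sane ceiling for real requests
MAX_PROMPT_CHARS = 32_768                # coarse input-size cap


def is_prompt_safe(prompt: str) -> bool:
    """Reject prompts likely to trigger the quadratic expansion."""
    if len(prompt) > MAX_PROMPT_CHARS:
        return False
    placeholder_count = sum(prompt.count(p) for p in PLACEHOLDERS)
    return placeholder_count <= MAX_PLACEHOLDERS


if __name__ == "__main__":
    assert is_prompt_safe("Describe this image: <|image_1|>")
    assert not is_prompt_safe("<|image_1|>" * 10_000)  # crafted DoS input
```

A filter like this only narrows the attack surface; the durable fix is the upgrade in recommendation 1 (for example, pip install --upgrade "vllm>=0.8.5").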
Affected Countries
Germany, France, United Kingdom, Netherlands, Sweden, Finland, Denmark, Ireland
Technical Details
- Data Version: 5.1
- Assigner Short Name: GitHub_M
- Date Reserved: 2025-04-24T21:10:48.174Z
- CISA Enriched: true
- CVSS Version: 3.1
- State: PUBLISHED
Related Threats
- CVE-2025-9053: SQL Injection in projectworlds Travel Management System (Medium)
- CVE-2025-9052: SQL Injection in projectworlds Travel Management System (Medium)
- CVE-2025-9019: Heap-based Buffer Overflow in tcpreplay (Low)
- CVE-2025-9017: Cross Site Scripting in PHPGurukul Zoo Management System (Medium)
- CVE-2025-9051: SQL Injection in projectworlds Travel Management System (Medium)