CVE-2026-27893: CWE-693: Protection Mechanism Failure in vllm-project vllm
vLLM versions from 0.10.1 up to but not including 0.18.0 contain a vulnerability where two model implementation files hardcode the setting trust_remote_code=True. This bypasses the user's explicit opt-out of remote code execution, enabling potential remote code execution via malicious model repositories. The issue is fixed in version 0.18.0.
AI Analysis
Technical Summary
The vLLM inference and serving engine for large language models has a protection mechanism failure (CWE-693) in versions >=0.10.1 and <0.18.0. Specifically, two model implementation files hardcode the parameter trust_remote_code=True when loading sub-components, ignoring the user's explicit security setting --trust-remote-code=False. This flaw allows an attacker to execute arbitrary code remotely by supplying malicious model repositories, even when users have disabled remote code trust. The vulnerability is addressed in vLLM version 0.18.0.
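The advisory does not name the affected files, so the following is an illustrative sketch (with hypothetical function names, not vLLM's actual code) of the general flaw class: a sub-component loader that hardcodes trust_remote_code=True instead of propagating the user's setting.

```python
# Illustrative sketch of the flaw class (hypothetical names, not actual
# vLLM code): a hardcoded keyword argument silently overrides the user's
# explicit security opt-out.

def load_subcomponent(model_id: str, trust_remote_code: bool = False) -> dict:
    # Stands in for a loader such as a config/tokenizer loader; returns
    # the effective settings it was called with.
    return {"model_id": model_id, "trust_remote_code": trust_remote_code}

def vulnerable_load(model_id: str, user_trust_remote_code: bool) -> dict:
    # Flawed pattern: the flag is hardcoded, so the user's opt-out
    # (user_trust_remote_code=False) never reaches the sub-loader.
    return load_subcomponent(model_id, trust_remote_code=True)

def fixed_load(model_id: str, user_trust_remote_code: bool) -> dict:
    # Fixed pattern: the user's setting is propagated unchanged.
    return load_subcomponent(model_id, trust_remote_code=user_trust_remote_code)
```

Even when the caller passes user_trust_remote_code=False, the vulnerable path still loads the sub-component with trust_remote_code=True, which is the protection mechanism failure described above.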
Potential Impact
Successful exploitation allows remote code execution with high impact on confidentiality, integrity, and availability. Attackers can run arbitrary code on the victim's system by leveraging malicious model repositories, bypassing user security preferences.
Mitigation Recommendations
Upgrade to vLLM version 0.18.0 or later, where this issue is patched. Users relying on earlier versions should update promptly to eliminate the vulnerability. No other mitigations are indicated by the vendor advisory.
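As a quick triage aid, a deployment's installed version can be compared against the affected range (>=0.10.1, <0.18.0). This is a minimal pure-Python sketch; production code might instead use packaging.version for full PEP 440 handling.

```python
# Minimal sketch: check whether a vLLM version string falls in the
# affected range (>=0.10.1, <0.18.0). Assumes plain dotted numeric
# versions; pre-release suffixes are not handled here.

def parse_version(v: str) -> tuple:
    # "0.17.2" -> (0, 17, 2); tuples compare element-wise.
    return tuple(int(part) for part in v.split("."))

def is_affected(version: str) -> bool:
    v = parse_version(version)
    return parse_version("0.10.1") <= v < parse_version("0.18.0")
```

For example, is_affected("0.17.9") is True while is_affected("0.18.0") is False, matching the fixed-version boundary in the advisory.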
Technical Details
- Data Version: 5.2
- Assigner Short Name: GitHub_M
- Date Reserved: 2026-02-24T15:19:29.717Z
- CVSS Version: 3.1
- State: PUBLISHED
Threat ID: 69c5c8713c064ed76fe63c5e
Added to database: 3/26/2026, 11:59:45 PM
Last enriched: 4/3/2026, 1:41:28 PM
Last updated: 5/11/2026, 6:09:31 AM