
CVE-2026-27893: CWE-693: Protection Mechanism Failure in vllm-project vllm

Severity: High
Published: 2026-03-26, 23:56:53 UTC
Source: CVE Database V5
Vendor/Project: vllm-project
Product: vllm

Description

CVE-2026-27893 is a high-severity vulnerability in vllm, the vllm-project's inference and serving engine for large language models. Versions from 0.10.1 up to but not including 0.18.0 contain two model implementation files that hardcode the setting trust_remote_code=True, overriding the user's explicit opt-out of remote code execution. An attacker who supplies a malicious model repository can therefore execute arbitrary code on the affected system, even when the user has disabled remote code trust. The vulnerability affects confidentiality, integrity, and availability. Exploitation requires user interaction (loading a malicious model) but no prior authentication. The issue is patched in version 0.18.0.

AI-Powered Analysis

Machine-generated threat intelligence

Last updated: 03/27/2026, 00:15:22 UTC

Technical Analysis

CVE-2026-27893 is a high-severity vulnerability in vllm, the vllm-project's inference and serving engine for large language models (LLMs). The flaw arises because two model implementation files in versions 0.10.1 through 0.17.x hardcode the parameter trust_remote_code=True when loading sub-components of models. This hardcoding bypasses the user's explicit security setting --trust-remote-code=False, which is intended to prevent execution of untrusted remote code. As a result, even when users opt out of trusting remote code, the software still executes potentially malicious code embedded in model repositories, leading to remote code execution (RCE) on the host system.

The vulnerability is classified under CWE-693 (Protection Mechanism Failure), indicating a failure to enforce security controls as designed. The CVSS v3.1 score is 8.8 (high severity), reflecting a network attack vector, low attack complexity, no privileges required, user interaction needed, and high impact on confidentiality, integrity, and availability. No exploits are known in the wild yet, but the risk is significant given the widespread adoption of LLM inference engines.

The issue is resolved in vllm version 0.18.0, which removes the hardcoded trust_remote_code parameter, restoring user control over remote code execution policy. This vulnerability highlights the need for strict enforcement of user-configured security options in AI model serving frameworks, especially as they increasingly integrate third-party or remote model components.
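The failure mode can be illustrated with a minimal sketch. The function names and signatures below are hypothetical, not actual vllm source; they only demonstrate the CWE-693 pattern of a loader hardcoding trust_remote_code=True and discarding the caller's opt-out.

```python
# Hypothetical sketch (not actual vllm source) of the flaw described above:
# a sub-component loader hardcodes trust_remote_code=True, silently
# discarding the caller's explicit opt-out.

def load_subcomponent_vulnerable(repo_id: str, trust_remote_code: bool) -> dict:
    # BUG: the user's setting is ignored; remote code is always trusted.
    return {"repo": repo_id, "trust_remote_code": True}

def load_subcomponent_patched(repo_id: str, trust_remote_code: bool) -> dict:
    # FIX: propagate the user's choice instead of hardcoding it.
    return {"repo": repo_id, "trust_remote_code": trust_remote_code}

# Even when the caller passes trust_remote_code=False, the vulnerable
# path still enables remote code execution for the loaded component.
vuln = load_subcomponent_vulnerable("attacker/model", trust_remote_code=False)
safe = load_subcomponent_patched("attacker/model", trust_remote_code=False)
```

The patched behavior corresponds to what the advisory describes for 0.18.0: the user-supplied flag is threaded through to every component load rather than being overridden.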

Potential Impact

The impact of CVE-2026-27893 is substantial for organizations deploying the vulnerable versions of vllm. Successful exploitation allows attackers to execute arbitrary code remotely on the inference server, potentially leading to full system compromise. This jeopardizes confidentiality by exposing sensitive data processed by the LLM, integrity by allowing manipulation of model outputs or system files, and availability by enabling denial-of-service or destructive actions. Since vllm is used to serve large language models, attackers could leverage this to inject malicious payloads, pivot within networks, or disrupt AI-driven services. The vulnerability circumvents explicit user security settings, increasing the risk of unnoticed exploitation. Organizations relying on LLM inference for critical applications, including research, customer service, or automation, face operational disruption and reputational damage. The lack of authentication requirement and network accessibility further widen the attack surface. Although no exploits are currently known in the wild, the high CVSS score and ease of exploitation necessitate urgent remediation to prevent potential attacks.

Mitigation Recommendations

To mitigate CVE-2026-27893, organizations should immediately upgrade vllm to version 0.18.0 or later, where the hardcoded trust_remote_code parameter is removed. Until upgrading is possible, users should avoid loading models from untrusted or unknown remote repositories. Implement strict validation and whitelisting of model sources to prevent malicious code injection. Employ network segmentation and firewall rules to restrict access to vllm inference servers, limiting exposure to untrusted networks. Monitor logs and system behavior for unusual activity indicative of exploitation attempts. Consider running vllm within isolated containers or sandboxed environments to contain potential compromises. Additionally, enforce strict user policies disallowing the use of the --trust-remote-code=True flag unless absolutely necessary and verified. Regularly audit configurations and update dependencies to ensure no residual vulnerable versions remain in production. Finally, maintain up-to-date threat intelligence to detect emerging exploits targeting this vulnerability.
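As part of auditing for residual vulnerable versions, a deployment script can refuse to start when the installed vllm falls in the affected range [0.10.1, 0.18.0). The sketch below uses a simplified dotted-integer version parser (an assumption; it does not handle pre-release tags):

```python
# Minimal version-range guard for the affected range described above:
# versions from 0.10.1 (inclusive) up to 0.18.0 (exclusive) are vulnerable.
# The plain dotted-integer parser is a simplifying assumption.

def parse_version(v: str) -> tuple:
    return tuple(int(p) for p in v.split("."))

def is_affected(version: str) -> bool:
    # Tuple comparison gives lexicographic ordering over version components.
    return parse_version("0.10.1") <= parse_version(version) < parse_version("0.18.0")
```

A guard like this can gate service startup until the upgrade to 0.18.0 or later (for example via `pip install -U "vllm>=0.18.0"`) has actually landed on every host.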


Technical Details

Data Version: 5.2
Assigner Short Name: GitHub_M
Date Reserved: 2026-02-24T15:19:29.717Z
CVSS Version: 3.1
State: PUBLISHED

Threat ID: 69c5c8713c064ed76fe63c5e

Added to database: 3/26/2026, 11:59:45 PM

Last enriched: 3/27/2026, 12:15:22 AM

Last updated: 3/27/2026, 1:15:30 AM
