CVE-2026-42271: CWE-77: Improper Neutralization of Special Elements used in a Command ('Command Injection') in BerriAI litellm
LiteLLM is a proxy server (AI Gateway) to call LLM APIs in OpenAI (or native) format. From version 1.74.2 to before version 1.83.7, two endpoints used to preview an MCP server before saving it — POST /mcp-rest/test/connection and POST /mcp-rest/test/tools/list — accepted a full server configuration in the request body, including the command, args, and env fields used by the stdio transport. When called with a stdio configuration, the endpoints attempted to connect, which spawned the supplied command as a subprocess on the proxy host with the privileges of the proxy process. The endpoints were gated only by a valid proxy API key, with no role check. Any authenticated user — including holders of low-privilege internal-user keys — could therefore run arbitrary commands on the host. This issue has been patched in version 1.83.7.
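The shape of a malicious request can be sketched as follows. The field names (command, args, env) and endpoint paths come from the advisory; the exact JSON schema, values, and server details are assumptions for illustration only, not a verified proof of concept:

```python
import json

# Hypothetical body for POST /mcp-rest/test/connection.
# Field names come from the advisory; the precise schema is assumed.
payload = {
    "transport": "stdio",         # stdio transport is what triggers the spawn
    "command": "touch",           # attacker-controlled binary
    "args": ["/tmp/pwned"],       # attacker-controlled arguments
    "env": {"PATH": "/usr/bin"},  # attacker-controlled environment
}

# On affected versions, the proxy would spawn `command` with `args` as a
# subprocess on the host, so any request reaching these endpoints with a
# stdio configuration should be treated as a potential code-execution
# attempt when auditing access logs.
print(json.dumps(payload, indent=2))
```

When reviewing proxy logs for exploitation attempts, requests to either test endpoint whose body contains a `command` field are the signal to look for.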
AI Analysis
Technical Summary
LiteLLM versions from 1.74.2 up to (but not including) 1.83.7 contain a command injection vulnerability (CWE-77) in two POST endpoints, /mcp-rest/test/connection and /mcp-rest/test/tools/list. These endpoints accepted a full server configuration, including the command, args, and env fields for the stdio transport. When invoked with a stdio configuration, the proxy spawned the supplied command as a subprocess with the proxy process's privileges. The endpoints required authentication, but any valid proxy API key sufficed; with no role-based access control, even low-privilege users could execute arbitrary commands on the host. The vulnerability was fixed in version 1.83.7.
Potential Impact
An authenticated user with a valid proxy API key, including those with low-privilege internal-user keys, could execute arbitrary commands on the proxy host. This could lead to full compromise of the host environment running LiteLLM, potentially allowing data theft, service disruption, or further lateral movement. The high CVSS score reflects the network attack vector, low attack complexity, and high impact on confidentiality, integrity, and availability.
Mitigation Recommendations
Upgrade LiteLLM to version 1.83.7 or later, where this vulnerability is patched. Until you can upgrade, restrict proxy API keys to trusted users, block or firewall the /mcp-rest/test/* endpoints at a reverse proxy if feasible, and monitor the proxy host for unexpected subprocess activity. Upgrading is the only official fix; the patch is confirmed by the vendor's versioning information.
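As a compensating control while awaiting the upgrade, a middleware or reverse-proxy filter in front of LiteLLM could refuse stdio configurations from non-admin callers. A minimal sketch, assuming hypothetical role names and a simplified config shape (not LiteLLM's actual RBAC or schema):

```python
# Assumed role name for illustration; not LiteLLM's actual role model.
ADMIN_ROLES = {"proxy_admin"}

def allow_mcp_test(config: dict, caller_role: str) -> bool:
    """Reject stdio transport (which spawns a subprocess on the host)
    unless the caller holds an admin role; pass network transports through."""
    if config.get("transport") == "stdio" or "command" in config:
        return caller_role in ADMIN_ROLES
    return True

# A low-privilege key supplying a stdio command is refused:
assert not allow_mcp_test({"transport": "stdio", "command": "sh"}, "internal_user")
# An admin previewing the same config is allowed:
assert allow_mcp_test({"transport": "stdio", "command": "sh"}, "proxy_admin")
```

Note the check also keys on the presence of a `command` field, so a request that omits the transport name but still supplies a command to spawn is caught as well.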
Technical Details
- Data Version: 5.2
- Assigner Short Name: GitHub_M
- Date Reserved: 2026-04-26T11:53:27.707Z
- CVSS Version: 4.0
- State: PUBLISHED
- Remediation Level: null
Threat ID: 69fd5dbdcbff5d86108b6463
Added to database: 5/8/2026, 3:51:25 AM
Last enriched: 5/8/2026, 4:06:43 AM
Last updated: 5/9/2026, 3:49:24 AM