CVE-2026-42203: CWE-1336: Improper Neutralization of Special Elements Used in a Template Engine in BerriAI litellm
LiteLLM is a proxy server (AI Gateway) to call LLM APIs in OpenAI (or native) format. From version 1.80.5 to before version 1.83.7, the POST /prompts/test endpoint accepted user-supplied prompt templates and rendered them without sandboxing. A crafted template could run arbitrary code inside the LiteLLM Proxy process. The endpoint only checks that the caller presents a valid proxy API key, so any authenticated user could reach it. Depending on how the proxy is deployed, this could expose secrets in the process environment (such as provider API keys or database credentials) and allow commands to be run on the host. This issue has been patched in version 1.83.7.
AI Analysis
Technical Summary
LiteLLM is an AI Gateway proxy server that calls LLM APIs in OpenAI or native format. From version 1.80.5 up to, but not including, 1.83.7, the POST /prompts/test endpoint fails to neutralize special elements in user-supplied prompt templates and renders them without sandboxing (CWE-1336). Any authenticated user holding a valid proxy API key can therefore execute arbitrary code inside the LiteLLM proxy process, potentially exposing secrets in the process environment and enabling command execution on the host. The issue is fixed in version 1.83.7.
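The advisory does not publish LiteLLM's actual template engine or an exploit payload, so the sketch below is only an illustration of the CWE-1336 class using Python's built-in str.format as a stand-in renderer. The names (PromptContext, SECRET_API_KEY) are hypothetical; the point is that rendering a user-controlled template against a live object lets attribute traversal reach data the author never intended to expose:

```python
# Illustrative stand-in, NOT LiteLLM's real code: str.format acts as a
# tiny "template engine" that performs no neutralization of its input.

SECRET_API_KEY = "sk-example-not-real"  # stands in for a provider key


class PromptContext:
    """Benign object handed to the renderer."""
    def __init__(self, user):
        self.user = user


def render_unsafe(template: str, ctx: PromptContext) -> str:
    # No neutralization: the user-supplied template is trusted verbatim.
    return template.format(ctx=ctx)


def render_safe(template: str, ctx: PromptContext) -> str:
    # One mitigation pattern: substitute only an explicit allow-list of
    # plain values, never the raw object, so dunder traversal from the
    # template has nothing to walk.
    allowed = {"user": ctx.user}
    return template.format_map(allowed)


ctx = PromptContext("alice")

# Intended use works the same in both renderers:
print(render_unsafe("Hello {ctx.user}", ctx))  # Hello alice
print(render_safe("Hello {user}", ctx))        # Hello alice

# A crafted template walks the object graph up to module globals and
# pulls out the "secret" -- the unsandboxed renderer happily obliges:
leak = render_unsafe("{ctx.__init__.__globals__[SECRET_API_KEY]}", ctx)
print(leak)  # sk-example-not-real
```

Real template engines (Jinja2 and friends) offer sandboxed environments for exactly this reason; the allow-list variant above is just the simplest shape of the same idea.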
Potential Impact
An attacker with a valid proxy API key can exploit this vulnerability to run arbitrary code within the LiteLLM proxy process. This can lead to disclosure of sensitive environment variables such as provider API keys and database credentials, and potentially allow execution of commands on the host system. This elevates the risk of unauthorized access and control over the host running LiteLLM.
Mitigation Recommendations
Upgrade LiteLLM to version 1.83.7 or later, where the vulnerability is patched. Until the upgrade is complete, restrict access to the POST /prompts/test endpoint to trusted users only, or block it outright, and monitor for suspicious requests to it.
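One way to block the endpoint pending the upgrade is a deny rule in front of the application. The sketch below is a hypothetical plain-ASGI guard (LiteLLM's proxy is ASGI-served, but this is not an official LiteLLM feature; the path and response wording are illustrative):

```python
# Hypothetical interim guard: short-circuit POST /prompts/test with 403
# before the request ever reaches the wrapped ASGI application.

BLOCKED = {("POST", "/prompts/test")}


def block_prompt_test(app):
    async def wrapped(scope, receive, send):
        if scope["type"] == "http" and \
                (scope.get("method"), scope.get("path")) in BLOCKED:
            await send({
                "type": "http.response.start",
                "status": 403,
                "headers": [(b"content-type", b"text/plain")],
            })
            await send({
                "type": "http.response.body",
                "body": b"endpoint disabled pending upgrade to >= 1.83.7",
            })
            return
        # Everything else passes through unchanged.
        await app(scope, receive, send)
    return wrapped
```

The same deny rule can be expressed at a reverse proxy (nginx, Envoy, etc.) if you would rather not touch the application process.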
Technical Details
- Data Version: 5.2
- Assigner Short Name: GitHub_M
- Date Reserved: 2026-04-25T05:04:37.027Z
- CVSS Version: 4.0
- State: PUBLISHED
- Remediation Level: null
Threat ID: 69fd5dbdcbff5d86108b6457
Added to database: 5/8/2026, 3:51:25 AM
Last enriched: 5/8/2026, 4:07:00 AM
Last updated: 5/9/2026, 5:16:10 AM