
CVE-2026-42203: CWE-1336: Improper Neutralization of Special Elements Used in a Template Engine in BerriAI litellm

Severity: High
Published: Fri May 08 2026 (05/08/2026, 03:36:58 UTC)
Source: CVE Database V5
Vendor/Project: BerriAI
Product: litellm

Description

LiteLLM is a proxy server (AI Gateway) to call LLM APIs in OpenAI (or native) format. From version 1.80.5 to before version 1.83.7, the POST /prompts/test endpoint accepted user-supplied prompt templates and rendered them without sandboxing. A crafted template could run arbitrary code inside the LiteLLM Proxy process. The endpoint only checks that the caller presents a valid proxy API key, so any authenticated user could reach it. Depending on how the proxy is deployed, this could expose secrets in the process environment (such as provider API keys or database credentials) and allow commands to be run on the host. This issue has been patched in version 1.83.7.
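The advisory does not name the template engine LiteLLM uses, so the following is only a minimal sketch of the CWE-1336 vulnerability class, using Python's `str.format` and a hypothetical `ProxyConfig` object standing in for server-side state reachable from the template context:

```python
class ProxyConfig:
    """Stands in for server-side state reachable from the render context."""
    def __init__(self):
        # hypothetical secret, standing in for provider keys or DB credentials
        self.provider_api_key = "sk-example-not-real"

def render_prompt(template: str, config: ProxyConfig) -> str:
    # VULNERABLE pattern: the user-supplied template is rendered with full
    # attribute access, so special elements such as {0.provider_api_key}
    # are not neutralized (CWE-1336)
    return template.format(config)

# A benign template behaves as intended...
render_prompt("You are a helpful assistant.", ProxyConfig())

# ...but a crafted template walks object attributes and leaks server secrets
leaked = render_prompt("{0.provider_api_key}", ProxyConfig())
```

Full-featured template engines expose far more than attribute access (method calls, imports, subprocess execution), which is how an unsandboxed render escalates from secret disclosure to arbitrary code execution.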

AI-Powered Analysis

Machine-generated threat intelligence

Last updated: 05/08/2026, 04:07:00 UTC

Technical Analysis

LiteLLM is an AI Gateway proxy server that calls LLM APIs in OpenAI or native format. In versions 1.80.5 up to but not including 1.83.7, the POST /prompts/test endpoint renders user-supplied prompt templates without sandboxing, failing to neutralize special template-engine elements (CWE-1336). Any authenticated user presenting a valid proxy API key can therefore execute arbitrary code inside the LiteLLM proxy process, potentially exposing environment secrets and enabling command execution on the host. The issue is fixed in version 1.83.7.
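To triage a deployment, the affected window can be checked with a simple version comparison. This sketch assumes plain MAJOR.MINOR.PATCH version strings (it does not implement full PEP 440 parsing):

```python
AFFECTED_MIN = (1, 80, 5)   # first affected release
FIXED = (1, 83, 7)          # first patched release

def parse(version: str) -> tuple:
    """Parse a dotted MAJOR.MINOR.PATCH string into a comparable tuple."""
    return tuple(int(part) for part in version.split(".")[:3])

def is_affected(version: str) -> bool:
    """True if the installed version falls inside the vulnerable range."""
    return AFFECTED_MIN <= parse(version) < FIXED

is_affected("1.82.0")   # inside the affected range
is_affected("1.83.7")   # first patched release
```

In practice the installed version can be obtained with `importlib.metadata.version("litellm")` before feeding it to `is_affected`.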

Potential Impact

An attacker with a valid proxy API key can exploit this vulnerability to run arbitrary code within the LiteLLM proxy process. This can lead to disclosure of sensitive environment variables such as provider API keys and database credentials, and potentially allow execution of commands on the host system. This elevates the risk of unauthorized access and control over the host running LiteLLM.

Mitigation Recommendations

Upgrade LiteLLM to version 1.83.7 or later, where this vulnerability is patched. Until the upgrade can be applied, restrict access to the POST /prompts/test endpoint to trusted users (for example, by blocking the route at a reverse proxy or limiting which API keys may reach it) and monitor for suspicious template submissions. If exploitation is suspected, rotate any secrets available to the proxy process, such as provider API keys and database credentials.
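The general defensive pattern (not LiteLLM's actual fix, which is not detailed in this advisory) is to render untrusted templates with an engine that can only substitute values and cannot evaluate expressions. A minimal sketch using the standard library's `string.Template`:

```python
from string import Template

def render_untrusted(template: str, values: dict) -> str:
    # string.Template only replaces $name placeholders; it cannot call
    # methods or traverse attributes, so crafted input stays inert.
    # safe_substitute leaves unknown placeholders in place instead of raising.
    return Template(template).safe_substitute(values)

render_untrusted("Summarize for $user", {"user": "alice"})
# → "Summarize for alice"

# An injection attempt is left as plain text rather than evaluated
render_untrusted("$__class__", {"user": "alice"})
```

For engines that must support richer syntax, the equivalent approach is a restricted execution mode such as Jinja2's `SandboxedEnvironment`, which blocks access to unsafe attributes during rendering.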


Technical Details

Data Version: 5.2
Assigner Short Name: GitHub_M
Date Reserved: 2026-04-25T05:04:37.027Z
CVSS Version: 4.0
State: PUBLISHED
Remediation Level: null

Threat ID: 69fd5dbdcbff5d86108b6457

Added to database: 5/8/2026, 3:51:25 AM

Last enriched: 5/8/2026, 4:07:00 AM

Last updated: 5/9/2026, 5:16:10 AM



