CVE-2026-35029: CWE-863: Incorrect Authorization in BerriAI litellm
LiteLLM is a proxy server (AI Gateway) for calling LLM APIs in OpenAI (or native) format. Prior to v1.83.0, the /config/update endpoint does not enforce admin role authorization. Any authenticated user can use this endpoint to modify proxy configuration and environment variables, enabling three attacks: registering custom pass-through endpoint handlers that point to attacker-controlled Python code (remote code execution); reading arbitrary server files by setting UI_LOGO_PATH and fetching the file via /get_image; and taking over privileged accounts by overwriting the UI_USERNAME and UI_PASSWORD environment variables. Fixed in v1.83.0.
AI Analysis
Technical Summary
LiteLLM is an AI Gateway proxy server that calls LLM APIs in OpenAI or native format. Versions before 1.83.0 have an authorization bypass in the /config/update endpoint, which does not enforce admin role checks. Authenticated users can exploit this to change proxy settings and environment variables, register malicious pass-through handlers executing attacker-controlled Python code, read arbitrary server files by manipulating UI_LOGO_PATH, and hijack privileged accounts by overwriting UI_USERNAME and UI_PASSWORD environment variables. The vulnerability is identified as CWE-863 (Incorrect Authorization) and carries a CVSS 4.0 score of 8.7, indicating high severity. The issue is resolved in LiteLLM version 1.83.0.
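The root cause can be illustrated with a minimal sketch (hypothetical names, not LiteLLM's actual code): a config-mutating route must verify the caller's role, not merely that the caller is authenticated.

```python
from dataclasses import dataclass

@dataclass
class User:
    name: str
    role: str  # e.g. "proxy_admin" or "internal_user" (illustrative values)

class AuthorizationError(Exception):
    pass

def require_admin(user: User) -> None:
    """Reject any caller that is authenticated but not an admin."""
    if user.role != "proxy_admin":
        raise AuthorizationError(f"{user.name} is not authorized for /config/update")

def config_update(user: User, new_settings: dict) -> dict:
    """Handler sketch: the vulnerable versions effectively skipped this role check."""
    require_admin(user)  # the fix: enforce role, not just authentication
    # ... apply new_settings to proxy config / environment variables ...
    return {"status": "updated", "keys": sorted(new_settings)}
```

The distinction this sketch draws is exactly CWE-863: the vulnerable endpoint authenticated the caller but never authorized the action.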
Potential Impact
An attacker with authenticated access can gain remote code execution on the server, read arbitrary files, and take over privileged accounts by exploiting the lack of authorization on the /config/update endpoint. This compromises the confidentiality, integrity, and availability of the affected system and potentially other connected resources.
Mitigation Recommendations
Upgrade LiteLLM to version 1.83.0 or later, where this authorization issue has been fixed. Since no official patch link or advisory is provided here, verify the fix against the vendor's official release notes or repository. Until upgraded, restrict access to the /config/update endpoint to trusted administrators only, for example by blocking the path at a reverse proxy or limiting network access to the admin interface.
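As a quick triage step, the affected range can be checked with a short sketch (simplified dotted-version comparison; a production check should use the `packaging` library's version parsing instead):

```python
# Fixed release for CVE-2026-35029, per the advisory above.
FIXED = (1, 83, 0)

def parse_version(v: str) -> tuple:
    """Parse 'v1.82.5' or '1.83.0-rc1' into a comparable tuple (1, 82, 5)."""
    core = v.lstrip("v").split("-")[0]  # drop leading "v" and pre-release suffix
    return tuple(int(part) for part in core.split("."))

def is_vulnerable(installed: str) -> bool:
    """True if the installed version predates the 1.83.0 fix."""
    return parse_version(installed) < FIXED
```

Tuple comparison handles multi-digit components correctly (1.9.0 sorts before 1.83.0), which naive string comparison would get wrong.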
Technical Details
- Data Version: 5.2
- Assigner Short Name: GitHub_M
- Date Reserved: 2026-03-31T21:06:06.427Z
- CVSS Version: 4.0
- State: PUBLISHED
- Remediation Level: null
Threat ID: 69d3ea320a160ebd92c9fda1
Added to database: 4/6/2026, 5:15:30 PM
Last enriched: 4/6/2026, 5:30:27 PM
Last updated: 4/7/2026, 3:03:42 AM