
CVE-2025-66451: CWE-20: Improper Input Validation in danny-avila LibreChat

Severity: Medium
Tags: Vulnerability, CVE-2025-66451, CWE-20, CWE-915
Published: Thu Dec 11 2025 (12/11/2025, 22:33:24 UTC)
Source: CVE Database V5
Vendor/Project: danny-avila
Product: LibreChat

Description

LibreChat is a ChatGPT clone with additional features. In versions 0.8.0 and below, prompts are defined and modified by sending JSON requests to the PATCH endpoint for prompt groups (/api/prompts/groups/:groupId). However, the request bodies are not sufficiently validated, enabling users to modify prompt groups in ways the front-end system does not intend. The patchPromptGroup function passes req.body directly to updatePromptGroup() without filtering sensitive fields. This issue is fixed in version 0.8.1.
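The following Express sketch (written in TypeScript for illustration; LibreChat itself is JavaScript) shows the vulnerable pattern described above. The route path matches the advisory, but the handler body, the updatePromptGroup signature, and the port are assumptions for illustration, not the actual LibreChat source.

```ts
import express from "express";

const app = express();
app.use(express.json());

// Stand-in for the real data-layer helper; this signature is an assumption.
async function updatePromptGroup(
  filter: { _id: string },
  update: Record<string, unknown>
): Promise<void> {
  // e.g. PromptGroup.findOneAndUpdate(filter, { $set: update }) in a Mongoose-backed store
  console.log("updating", filter, "with", update);
}

app.patch("/api/prompts/groups/:groupId", async (req, res) => {
  // VULNERABLE: req.body is forwarded unfiltered, so a crafted PATCH request can
  // set fields the front end never exposes (ownership, visibility, internal flags, ...).
  await updatePromptGroup({ _id: req.params.groupId }, req.body);
  res.json({ ok: true });
});

app.listen(3080);
```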

AI-Powered Analysis

Last updated: 12/11/2025, 22:55:59 UTC

Technical Analysis

Versions 0.8.0 and below of LibreChat, an open-source ChatGPT clone developed by danny-avila, contain a vulnerability (CVE-2025-66451) caused by improper input validation (CWE-20) and improperly controlled modification of object attributes (CWE-915). The flaw resides in the PATCH endpoint /api/prompts/groups/:groupId, which accepts JSON requests to create or modify prompt groups. The backend function patchPromptGroup passes the request body directly to updatePromptGroup() without filtering or validating sensitive fields, enabling attackers to manipulate prompt-group configuration beyond the restrictions the front end is meant to enforce. This can lead to unauthorized changes in chatbot behavior, such as injecting malicious or misleading prompts, degrading chatbot integrity, or bypassing usage policies. The vulnerability is remotely exploitable over the network without authentication; exploitation consists of sending crafted PATCH requests to the endpoint. The CVSS 4.0 vector indicates low attack complexity, low confidentiality and integrity impact, and no availability impact. The issue was fixed in LibreChat version 0.8.1 by adding proper input validation and filtering of sensitive fields in the API. No known exploits had been reported in the wild as of the publication date.
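A minimal sketch of the field-filtering approach the fix implies: the update object is built from an explicit allowlist instead of trusting req.body. The field names below are illustrative assumptions, not the exact list used in the 0.8.1 patch.

```ts
// Allowlist of prompt-group fields the front end is meant to edit.
// These names are illustrative assumptions, not LibreChat's actual schema.
const EDITABLE_FIELDS = ["name", "category", "oneliner", "isPublic"] as const;
type EditableField = (typeof EDITABLE_FIELDS)[number];

// Copy only allowlisted fields from the incoming body into the update object.
function pickEditableFields(
  body: Record<string, unknown>
): Partial<Record<EditableField, unknown>> {
  const update: Partial<Record<EditableField, unknown>> = {};
  for (const field of EDITABLE_FIELDS) {
    if (Object.prototype.hasOwnProperty.call(body, field)) {
      update[field] = body[field];
    }
  }
  return update;
}

// Inside the PATCH handler, pass only the filtered object to the update helper:
//   await updatePromptGroup({ _id: req.params.groupId }, pickEditableFields(req.body));
```

Rejecting bodies that contain unknown fields outright, rather than silently dropping them, gives callers a clearer failure signal and surfaces probing attempts in logs.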

Potential Impact

For European organizations running LibreChat versions prior to 0.8.1, this vulnerability poses a risk to the integrity of AI-driven chatbot interactions. Attackers could manipulate prompt groups to alter chatbot responses, potentially spreading misinformation, enabling social engineering, or bypassing content restrictions. This could undermine trust in AI services, damage organizational reputation, and expose users to harmful content. While the rated confidentiality impact is low and availability is not affected, the practical integrity impact is significant, especially for organizations relying on AI chatbots for customer support, internal knowledge bases, or automated decision-making. The lack of an authentication requirement increases the attack surface, allowing external threat actors to exploit the vulnerability remotely. Given the growing adoption of AI chatbots in Europe, especially in sectors such as finance, healthcare, and public services, the threat could disrupt critical communication channels and lead to compliance issues under regulations such as GDPR if user data or interactions are manipulated.

Mitigation Recommendations

1. Upgrade all LibreChat instances to version 0.8.1 or later immediately to apply the official patch that enforces proper input validation and filtering.
2. Implement strict server-side validation on all API endpoints, especially PATCH requests that modify prompt groups, so that unexpected or sensitive fields are rejected (see the sketch after this list).
3. Enforce authentication and authorization controls on API endpoints so that only authorized users can modify prompt configurations.
4. Monitor API logs for unusual PATCH requests or prompt-group modifications that deviate from normal usage patterns.
5. Conduct regular security audits and code reviews focused on input validation and access controls in AI chatbot platforms.
6. Educate developers and administrators on secure handling of JSON input and API security.
7. Consider deploying a Web Application Firewall (WAF) with custom rules to detect and block malformed or suspicious API requests targeting prompt-modification endpoints.
8. Maintain an incident response plan to quickly address detected exploitation attempts or suspicious prompt-manipulation activity.
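To illustrate recommendation 2, the sketch below uses a reusable Express middleware that validates PATCH bodies against a strict zod schema, rejecting any field outside the allowlist. The schema's field names and limits are assumptions for illustration, not LibreChat's actual prompt-group model.

```ts
import express, { Request, Response, NextFunction } from "express";
import { z } from "zod";

// Strict schema: any field outside this allowlist causes validation to fail.
// Field names and limits are illustrative assumptions.
const promptGroupPatchSchema = z
  .object({
    name: z.string().min(1).max(256).optional(),
    category: z.string().max(128).optional(),
    oneliner: z.string().max(512).optional(),
    isPublic: z.boolean().optional(),
  })
  .strict();

// Generic body-validation middleware: reject invalid bodies with 400 and
// replace req.body with the parsed, known-fields-only result.
function validateBody(schema: z.ZodTypeAny) {
  return (req: Request, res: Response, next: NextFunction) => {
    const result = schema.safeParse(req.body);
    if (!result.success) {
      res.status(400).json({ error: "Invalid request body", issues: result.error.issues });
      return;
    }
    req.body = result.data;
    next();
  };
}

const app = express();
app.use(express.json());

app.patch(
  "/api/prompts/groups/:groupId",
  validateBody(promptGroupPatchSchema),
  async (req, res) => {
    // req.body now contains only allowlisted, validated fields.
    res.json({ ok: true });
  }
);
```

This pattern provides defense in depth even after upgrading to 0.8.1, and the same middleware can be reused on other endpoints that accept client-supplied JSON.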


Technical Details

Data Version: 5.2
Assigner Short Name: GitHub_M
Date Reserved: 2025-12-01T18:44:35.638Z
CVSS Version: 4.0
State: PUBLISHED

Threat ID: 693b487c22246175c6a6ed55

Added to database: 12/11/2025, 10:41:00 PM

Last enriched: 12/11/2025, 10:55:59 PM

Last updated: 12/11/2025, 11:54:16 PM



