
CVE-2025-63390: n/a

Severity: Unknown (no CVSS score assigned)
Published: Thu Dec 18 2025 (12/18/2025, 00:00:00 UTC)
Source: CVE Database V5

Description

An authentication bypass vulnerability exists in AnythingLLM v1.8.5 via the /api/workspaces endpoint. The endpoint fails to implement proper authentication checks, allowing unauthenticated remote attackers to enumerate and retrieve detailed information about all configured workspaces. Exposed data includes: workspace identifiers (id, name, slug), AI model configurations (chatProvider, chatModel, agentProvider), system prompts (openAiPrompt), operational parameters (temperature, history length, similarity thresholds), vector search settings, chat modes, and timestamps.
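A deployment can be checked for this exposure by querying the endpoint without credentials and inspecting what comes back. The sketch below is a minimal illustration in TypeScript (Node 18+ for the global fetch): the default port 3001, the ts-node invocation, and the leaked-field heuristic are assumptions drawn from this description rather than vendor tooling, and it should only be run against instances you are authorized to test.

// check-workspaces-exposure.ts: probe a single AnythingLLM instance for CVE-2025-63390.
// Usage (hypothetical): npx ts-node check-workspaces-exposure.ts https://anythingllm.example.com

async function checkExposure(baseUrl: string): Promise<void> {
  // Deliberately send no Authorization header; a properly protected instance
  // should answer 401/403 rather than returning workspace data.
  const res = await fetch(`${baseUrl.replace(/\/$/, "")}/api/workspaces`, {
    method: "GET",
    headers: { Accept: "application/json" },
  });

  if (res.status === 401 || res.status === 403) {
    console.log(`Not exposed: HTTP ${res.status}; unauthenticated requests are rejected.`);
    return;
  }

  const body = await res.text();
  // Heuristic only: look for field names the advisory says are leaked.
  const leakedFields = ["slug", "chatProvider", "openAiPrompt"];
  const leaky = leakedFields.some((field) => body.includes(field));

  console.log(
    leaky
      ? `Likely vulnerable: HTTP ${res.status} returned workspace configuration data.`
      : `Inconclusive: HTTP ${res.status}; inspect the response body manually.`
  );
}

// Port 3001 as a default is an assumption; pass your instance URL explicitly.
checkExposure(process.argv[2] ?? "http://localhost:3001").catch((err) => {
  console.error("Request failed:", err);
});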

AI-Powered Analysis

Last updated: 12/18/2025, 16:11:48 UTC

Technical Analysis

CVE-2025-63390 is a critical authentication bypass vulnerability in AnythingLLM version 1.8.5, affecting the /api/workspaces endpoint. Because the endpoint enforces no authentication or authorization checks, unauthenticated remote attackers can enumerate and retrieve detailed information about all configured workspaces: workspace identifiers (id, name, slug), AI model configurations (chatProvider, chatModel, agentProvider), system prompts (openAiPrompt), operational parameters (temperature, history length, similarity thresholds), vector search settings, chat modes, and timestamps.

This information leakage gives attackers deep insight into the AI deployment environment, enabling them to tailor follow-on attacks or exploit other vulnerabilities. Although no public exploits have been reported yet, the flaw is straightforward to exploit remotely and requires neither credentials nor user interaction. With no CVSS score assigned, severity must be judged from impact and exploitability: the vulnerability directly compromises confidentiality by exposing sensitive configuration data, and it can affect integrity and availability if leveraged in chained attacks. The scope is broad, covering every AnythingLLM v1.8.5 instance that exposes the vulnerable endpoint.

The issue is particularly concerning for organizations relying on AnythingLLM for AI-driven operations, since attackers can gather intelligence on AI model configurations and operational parameters that may be proprietary or sensitive. Immediate remediation involves enforcing strict authentication and authorization on the /api/workspaces endpoint, auditing access logs for suspicious activity, and applying vendor patches or updates once available. Organizations should also review the exposed data for potential leakage and consider rotating any credentials or keys that may have been indirectly exposed. Given the increasing adoption of AI platforms in Europe, this vulnerability poses a significant risk to confidentiality and operational security.
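In effect, the flaw is a route that serializes workspace data before any session or token validation runs. A hardened handler rejects unauthenticated callers up front, as in the generic Express-style sketch below. This is an illustration only, not AnythingLLM's actual code: the requireAuth middleware, the API_ACCESS_TOKEN environment variable, the placeholder handler body, and port 3001 are all assumptions made for the example.

// Generic Express-style guard for a workspace listing route. Illustrates the missing
// control; it is not AnythingLLM's real session/JWT handling.
import express, { Request, Response, NextFunction } from "express";

const app = express();

function requireAuth(req: Request, res: Response, next: NextFunction): void {
  const header = req.headers.authorization ?? "";
  const token = header.startsWith("Bearer ") ? header.slice("Bearer ".length) : "";

  // API_ACCESS_TOKEN is a hypothetical deployment secret used only in this sketch.
  if (!token || token !== process.env.API_ACCESS_TOKEN) {
    res.status(401).json({ error: "Authentication required" });
    return;
  }
  next();
}

// The authentication check runs before the handler, so unauthenticated callers
// never reach the code that serializes workspace configuration.
app.get("/api/workspaces", requireAuth, (_req: Request, res: Response) => {
  res.json({ workspaces: [] }); // placeholder body standing in for the real listing
});

app.listen(3001, () => console.log("listening on :3001")); // port is an assumption

In a real deployment the same guard should apply to every /api/* route rather than the workspace listing alone, and it should use the application's existing session or token mechanism instead of a single shared secret.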

Potential Impact

The primary impact of CVE-2025-63390 is the unauthorized disclosure of sensitive configuration and operational data for AI workspaces managed by AnythingLLM. For European organizations, this creates several risks: exposure of proprietary AI model configurations and system prompts may compromise intellectual property and competitive advantage; attackers can use the detailed information to craft targeted attacks or social engineering campaigns; knowledge of operational parameters and vector search settings could facilitate manipulation or disruption of AI-driven processes; and attackers may leverage the information to escalate privileges or inject malicious inputs, with indirect impacts on data integrity and availability.

The vulnerability undermines trust in AI deployments and may have regulatory implications under GDPR if personal or sensitive data is indirectly exposed or if the breach leads to further data compromise. Organizations in sectors such as finance, healthcare, and critical infrastructure that increasingly rely on AI technologies are particularly at risk. Because exploitation requires no authentication, reconnaissance and follow-on attacks become more likely, potentially leading to operational disruptions or data breaches.

Mitigation Recommendations

1. Immediately restrict access to the /api/workspaces endpoint by implementing robust authentication and authorization mechanisms, ensuring only authorized users and systems can query workspace information.
2. Conduct a thorough audit of current access logs to detect any unauthorized or suspicious access attempts to the vulnerable endpoint (a minimal log-audit sketch follows this list).
3. Apply any available patches or updates from the vendor addressing this vulnerability as soon as they are released.
4. If patches are not yet available, consider deploying web application firewalls (WAFs) or API gateways to block unauthenticated requests to the affected endpoint.
5. Review and minimize the amount of sensitive information exposed via APIs, following the principles of least privilege and data minimization.
6. Rotate any credentials, API keys, or tokens that may have been exposed or are linked to the compromised configurations.
7. Educate development and operations teams on secure API design and the importance of enforcing authentication and authorization on all endpoints.
8. Monitor threat intelligence feeds for any emerging exploits or attack campaigns leveraging this vulnerability to enable rapid response.
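For recommendation 2, a short script can summarize which clients have been requesting the endpoint. The sketch below assumes a reverse proxy writing combined-format access logs in front of AnythingLLM; the log path, the regular expression, and the Node/TypeScript tooling are assumptions to adapt to your environment, and a high request count is only a starting point for triage, not proof of compromise.

// audit-workspaces-access.ts: count requests to /api/workspaces per client IP from a
// combined-format access log (format and path are assumptions; adjust for your proxy).
// Usage (hypothetical): npx ts-node audit-workspaces-access.ts /var/log/nginx/access.log
import { createReadStream } from "node:fs";
import { createInterface } from "node:readline";

async function audit(logPath: string): Promise<void> {
  const hitsByIp = new Map<string, number>();
  // First field of a combined-format line is the client IP; the quoted section is the request line.
  const linePattern = /^(\S+).*?"(?:GET|POST) \/api\/workspaces[^"]*"/;

  const lines = createInterface({ input: createReadStream(logPath) });
  for await (const line of lines) {
    const match = linePattern.exec(line);
    if (match) {
      hitsByIp.set(match[1], (hitsByIp.get(match[1]) ?? 0) + 1);
    }
  }

  if (hitsByIp.size === 0) {
    console.log("No requests to /api/workspaces found.");
    return;
  }
  // Highest-volume sources first; a triage starting point, not a verdict.
  for (const [ip, count] of [...hitsByIp.entries()].sort((a, b) => b[1] - a[1])) {
    console.log(`${count}\t${ip}`);
  }
}

audit(process.argv[2] ?? "access.log").catch((err) => console.error("Audit failed:", err));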


Technical Details

Data Version: 5.2
Assigner Short Name: mitre
Date Reserved: 2025-10-27T00:00:00.000Z
CVSS Version: null
State: PUBLISHED

Threat ID: 6944242d4eb3efac36964743

Added to database: 12/18/2025, 3:56:29 PM

Last enriched: 12/18/2025, 4:11:48 PM

Last updated: 12/19/2025, 1:18:01 PM

Views: 19

Community Reviews

0 reviews


