CVE-2025-63681: n/a
open-webui v0.6.33 is vulnerable to Incorrect Access Control. The API endpoint /api/tasks/stop/ cancels tasks without verifying user ownership, enabling an attacker (any authenticated, non-privileged user) to stop arbitrary LLM response tasks belonging to other users.
AI Analysis
Technical Summary
CVE-2025-63681 identifies an Incorrect Access Control vulnerability in open-webui version 0.6.33. The vulnerability exists in the API endpoint /api/tasks/stop/, which is designed to allow users to cancel ongoing LLM response tasks. However, the endpoint fails to verify whether the requesting user actually owns the task they are attempting to stop. This lack of ownership verification means that any authenticated user can stop tasks initiated by other users.

The vulnerability does not require elevated privileges beyond normal user authentication, nor does it require user interaction beyond making the API call. While it does not directly expose sensitive data or allow privilege escalation, it enables attackers to disrupt legitimate LLM operations by cancelling tasks arbitrarily, effectively causing a denial of service to other users. No CVSS score has been assigned yet, and no public exploits have been reported.

The vulnerability highlights a common security oversight in API design: access control is not properly enforced on user-specific resources. Open-webui is a platform used to manage and interact with large language models, and such disruptions could impact workflows relying on AI task processing. The flaw can be mitigated by implementing strict access control checks that confirm task ownership before allowing cancellation, ensuring that users can only stop their own tasks.
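The missing check can be illustrated with a minimal sketch. This is not open-webui's actual code; the names (`stop_task`, `can_stop_task`, `TASK_OWNERS`) and the in-memory ownership map are assumptions for demonstration. The point is that the endpoint must resolve the task's owner and compare it to the requesting user before cancelling:

```python
# Hypothetical sketch of the ownership check the vulnerable endpoint lacks.
# In a real deployment the task -> owner mapping would live in the
# application's task registry or database, not an in-memory dict.
TASK_OWNERS = {"task-123": "alice"}  # illustrative task_id -> user_id map


def can_stop_task(task_id: str, user_id: str, is_admin: bool = False) -> bool:
    """Return True only if the task exists and the requester owns it
    (or holds an admin role)."""
    owner = TASK_OWNERS.get(task_id)
    return owner is not None and (is_admin or owner == user_id)


def stop_task(task_id: str, user_id: str, is_admin: bool = False) -> str:
    # The vulnerable endpoint cancels unconditionally; the fix is to
    # reject requests from non-owners before touching the task.
    if not can_stop_task(task_id, user_id, is_admin):
        raise PermissionError("requester does not own this task")
    # ... cancel the running LLM task here ...
    return "cancelled"
```

In a FastAPI-style application this check would typically sit behind an authentication dependency that supplies the verified `user_id`, so the comparison uses the server-side identity rather than anything the client sends in the request body.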
Potential Impact
For European organizations, the primary impact of this vulnerability is operational disruption. Organizations using open-webui to manage LLM tasks may experience denial of service conditions where users can arbitrarily cancel others' tasks, leading to workflow interruptions and reduced productivity. This could be particularly problematic in environments where LLM tasks are critical for business processes, research, or customer-facing applications. Although no direct data breach or privilege escalation is involved, the integrity and availability of AI-driven services are compromised. This may erode user trust and cause delays in AI model outputs. In sectors like finance, healthcare, or research where AI task continuity is important, such disruptions could have cascading effects. The absence of known exploits reduces immediate risk, but the ease of exploitation by any authenticated user means the vulnerability should be addressed promptly to avoid potential misuse. Additionally, organizations may face compliance scrutiny if service reliability is impacted, especially under regulations emphasizing operational resilience.
Mitigation Recommendations
To mitigate this vulnerability, organizations should:
1) Implement strict access control checks on the /api/tasks/stop/ endpoint to verify that the user requesting task cancellation is the owner of the task.
2) Review and audit all API endpoints managing user-specific resources to ensure proper authorization enforcement.
3) Apply the principle of least privilege by limiting API access tokens or credentials to only necessary scopes.
4) Monitor API usage logs for unusual patterns of task cancellations that could indicate abuse.
5) Update open-webui to a patched version once available, or apply custom patches to enforce ownership verification.
6) Educate developers and administrators on secure API design principles to prevent similar issues.
7) Consider implementing rate limiting or anomaly detection on task cancellation requests to reduce potential abuse.
These steps go beyond generic advice by focusing on ownership verification and proactive monitoring tailored to the specific API flaw.
Affected Countries
Germany, France, United Kingdom, Netherlands, Sweden, Finland
Technical Details
- Data Version: 5.2
- Assigner Short Name: mitre
- Date Reserved: 2025-10-27T00:00:00.000Z
- CVSS Version: null
- State: PUBLISHED
Threat ID: 6931a8e704d931fa5b427e8e
Added to database: 12/4/2025, 3:29:43 PM
Last enriched: 12/4/2025, 3:45:33 PM
Last updated: 12/5/2025, 2:36:23 AM