
CVE-2025-48950: CWE-276: Incorrect Default Permissions in 1Panel-dev MaxKB

Severity: Medium
Tags: vulnerability, cve-2025-48950, cwe-276
Published: Tue Jun 03 2025 (06/03/2025, 18:16:09 UTC)
Source: CVE Database V5
Vendor/Project: 1Panel-dev
Product: MaxKB

Description

MaxKB is an open-source AI assistant for enterprise. Prior to version 1.10.8-lts, the Sandbox only restricts the execution permissions of binary files in common directories such as `/bin` and `/usr/bin`. Attackers can therefore exploit files with execution permissions in non-blacklisted directories to carry out attacks. Version 1.10.8-lts fixes the issue.
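
To make the weakness concrete, the sketch below shows the kind of directory-blacklist check the description implies. This is an illustrative Python example only, not MaxKB's actual Sandbox code; the paths, constants, and function name are assumptions.

```python
# Illustrative sketch of a blacklist-style execution check (NOT MaxKB's actual code).
# Paths and names are assumptions, used only to show why blacklisting directories fails.
import os

BLACKLISTED_DIRS = ("/bin", "/usr/bin", "/sbin", "/usr/sbin")

def execution_blocked(path: str) -> bool:
    """Block execution only for binaries under the blacklisted directories."""
    real = os.path.realpath(path)
    return any(real == d or real.startswith(d + os.sep) for d in BLACKLISTED_DIRS)

# A binary in a common system directory is blocked ...
assert execution_blocked("/usr/bin/python3") is True
# ... but the same binary copied to a non-blacklisted directory is not:
assert execution_blocked("/tmp/python3") is False
```

Because the check only inspects where a file lives, any executable staged outside the listed directories passes unchallenged.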

AI-Powered Analysis

Last updated: 07/11/2025, 06:16:11 UTC

Technical Analysis

CVE-2025-48950 is a vulnerability identified in the open-source enterprise AI assistant MaxKB developed by 1Panel-dev. The flaw is classified under CWE-276, which pertains to incorrect default permissions. Specifically, versions of MaxKB prior to 1.10.8-lts implement a sandboxing mechanism that restricts execution permissions only on binary files located in common system directories such as /bin and /usr/bin. The sandbox does not restrict execution on files located in other, non-blacklisted directories, and this oversight allows attackers to exploit executable files in those directories to perform unauthorized actions.

The vulnerability does not require user interaction or authentication, and it can be exploited remotely over the network with low attack complexity. The CVSS 4.0 base score is 5.8 (medium severity), reflecting the moderate impact and ease of exploitation. Confidentiality, integrity, and availability are all affected, owing to the potential for arbitrary code execution or privilege escalation within the sandbox environment.

The issue was addressed in version 1.10.8-lts by extending sandbox restrictions to cover execution permissions more comprehensively across directories. No known exploits are currently reported in the wild, but the vulnerability presents a significant risk if left unpatched, especially in enterprise environments where MaxKB is deployed to assist with AI-driven tasks.
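
The 1.10.8-lts patch itself is not reproduced in this analysis. As a hedged illustration of the "restrict more comprehensively" direction described above, a deny-by-default allowlist check might look like the following; the allowlist contents and function name are assumptions for illustration only.

```python
# Conceptual "deny by default" alternative to a directory blacklist.
# This does NOT reproduce the 1.10.8-lts patch; the allowlist below is hypothetical.
import os

ALLOWED_EXECUTABLES = {"/usr/bin/python3"}  # only explicitly approved binaries

def may_execute(path: str) -> bool:
    """Permit execution only if the resolved path is explicitly allowlisted."""
    return os.path.realpath(path) in ALLOWED_EXECUTABLES

# Anything an attacker stages elsewhere (e.g. /tmp/evil) is refused by default,
# regardless of which directory it lives in or what its permission bits are.
```

The design difference matters: a blacklist must enumerate every dangerous location in advance, while an allowlist fails closed for anything it has not been told about.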

Potential Impact

For European organizations using MaxKB, this vulnerability could lead to unauthorized execution of malicious code within the AI assistant environment, potentially compromising sensitive enterprise data or disrupting AI-driven operations. Given MaxKB's role as an enterprise AI assistant, exploitation could allow attackers to manipulate AI outputs, access confidential information, or pivot to other internal systems. This risk is heightened in sectors with stringent data protection requirements such as finance, healthcare, and critical infrastructure. The medium severity score indicates a moderate but tangible threat that could impact operational continuity and data integrity. Organizations relying on MaxKB for automation or decision support may face degraded trust in AI outputs or operational interruptions. Additionally, the lack of required user interaction or authentication means that attackers could exploit this vulnerability remotely, increasing the threat surface. The absence of known exploits in the wild currently reduces immediate risk but does not eliminate the potential for future attacks, especially as threat actors often target AI-related tools due to their growing adoption.

Mitigation Recommendations

European organizations should promptly upgrade MaxKB to version 1.10.8-lts or later, where the sandbox execution permission restrictions have been properly extended to all relevant directories. Until the upgrade is applied, organizations should implement strict monitoring of execution permissions on non-standard directories within the MaxKB environment and restrict access to these directories to trusted users only. Employing application whitelisting and runtime application self-protection (RASP) mechanisms can help detect and block unauthorized execution attempts. Network segmentation and limiting MaxKB’s network exposure can reduce the risk of remote exploitation. Additionally, organizations should conduct regular audits of file permissions and sandbox configurations to ensure no inadvertent execution permissions are granted outside the intended scope. Incorporating behavioral analytics to detect anomalous AI assistant behavior may also help identify exploitation attempts early. Finally, maintaining up-to-date backups and incident response plans tailored to AI assistant compromise scenarios will improve resilience.
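
As a starting point for the permission audits suggested above, a small script can enumerate files with the execute bit set in directories that a blacklist of common system paths would not cover. The scan roots below are placeholders (a hypothetical MaxKB install path is assumed) and should be adjusted to the actual deployment.

```python
# Hedged example: list regular files with any execute bit set under directories that
# a blacklist of common system paths would not cover. SCAN_ROOTS is a placeholder;
# adjust it to the actual MaxKB/sandbox deployment.
import os
import stat

SCAN_ROOTS = ["/opt/maxkb", "/tmp", "/var/tmp"]  # hypothetical paths to review

def unexpected_executables(roots):
    """Yield regular files under `roots` that have any execute bit set."""
    for root in roots:
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    mode = os.stat(path, follow_symlinks=False).st_mode
                except OSError:
                    continue
                if stat.S_ISREG(mode) and mode & (stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH):
                    yield path

if __name__ == "__main__":
    for path in unexpected_executables(SCAN_ROOTS):
        print(path)
```

Reviewing the output regularly, and removing the execute bit from anything that does not need it, narrows the attack surface until the upgrade to 1.10.8-lts is in place.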


Technical Details

Data Version: 5.1
Assigner Short Name: GitHub_M
Date Reserved: 2025-05-28T18:49:07.584Z
CVSS Version: 4.0
State: PUBLISHED

Threat ID: 683f3ee7182aa0cae28796b6

Added to database: 6/3/2025, 6:28:55 PM

Last enriched: 7/11/2025, 6:16:11 AM

Last updated: 8/15/2025, 5:37:35 PM

