
CVE-2024-10273: CWE-863 Incorrect Authorization in lunary-ai/lunary

Severity: Medium
Published: Thu Mar 20 2025 (03/20/2025, 10:08:48 UTC)
Source: CVE Database V5
Vendor/Project: lunary-ai
Product: lunary-ai/lunary

Description

In lunary-ai/lunary v1.5.0, improper privilege management in the models.ts file allows users with viewer roles to modify models owned by others. The PATCH endpoint for models does not have appropriate privilege checks, enabling low-privilege users to update models they should not have access to modify. This vulnerability could lead to unauthorized changes in critical resources, affecting the integrity and reliability of the system.
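The flaw class is straightforward to illustrate in an Express-style TypeScript route. The sketch below is hypothetical: the handler paths, middleware, and db helper are assumptions for illustration, not the actual lunary-ai/lunary code. It contrasts a PATCH handler that verifies identity but never checks whether the caller's role permits the write with the corrected shape.

```typescript
import express, { Request, Response, NextFunction } from "express";

// Hypothetical illustration of the CWE-863 pattern described above:
// handler paths, the middleware, and the db helper are assumptions
// for this sketch, not the actual lunary-ai/lunary code.

interface AuthedRequest extends Request {
  user?: { id: string; role: "owner" | "editor" | "viewer" };
}

// Stand-in auth middleware: verifies identity only, never privileges.
function requireAuth(req: AuthedRequest, _res: Response, next: NextFunction) {
  // ...token validation elided; assume req.user is populated here.
  req.user = req.user ?? { id: "demo", role: "viewer" };
  next();
}

const app = express();
app.use(express.json());

// Vulnerable shape: any authenticated caller, including a viewer,
// reaches the update logic because no role or ownership check runs.
app.patch("/vulnerable/models/:id", requireAuth, (req: AuthedRequest, res: Response) => {
  // db.updateModel(req.params.id, req.body)  <-- write happens unconditionally
  res.sendStatus(200);
});

// Corrected shape: reject callers whose role does not permit writes.
app.patch("/fixed/models/:id", requireAuth, (req: AuthedRequest, res: Response) => {
  if (!req.user || req.user.role === "viewer") {
    res.sendStatus(403); // read-only roles cannot modify models
    return;
  }
  // db.updateModel(req.params.id, req.body)
  res.sendStatus(200);
});
```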

AI-Powered Analysis

Last updated: 10/15/2025, 13:12:43 UTC

Technical Analysis

CVE-2024-10273 is an authorization vulnerability, classified under CWE-863, in version 1.5.0 of the lunary-ai/lunary project. The issue stems from improper privilege management in the models.ts file, where the PATCH endpoint responsible for updating AI models lacks adequate authorization checks. As a result, users assigned only viewer roles, who should have read-only access, can modify models owned by other users.

The vulnerability requires no user interaction and can be exploited remotely over the network with low attack complexity and only low privileges. Its impact falls primarily on integrity: unauthorized users can alter critical AI models, potentially producing corrupted or manipulated outputs and undermining trust in the AI system. The CVSS v3.0 score is 6.5, reflecting medium severity with high impact on integrity and no impact on confidentiality or availability.

No public exploits have been reported yet, but the vulnerability poses a real risk in environments where AI models are critical assets. The lack of patch links suggests that fixes may not yet be widely available, emphasizing the need for immediate mitigation by affected organizations.
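From an attacker's perspective, exploitation is a single authenticated HTTP request. The metrics above are consistent with a vector of CVSS:3.0/AV:N/AC:L/PR:L/UI:N/S:U/C:N/I:H/A:N, which computes to 6.5; this vector string is inferred from the description rather than quoted from the advisory. A hedged verification sketch follows, with the endpoint path and payload fields assumed for illustration:

```typescript
// Hypothetical probe: does a viewer-role credential accept model writes?
// The URL path and body fields are assumptions, not lunary's documented API.
// Requires Node 18+ for the global fetch API.
async function probeViewerWrite(baseUrl: string, modelId: string, viewerToken: string): Promise<number> {
  const res = await fetch(`${baseUrl}/v1/models/${modelId}`, {
    method: "PATCH",
    headers: {
      Authorization: `Bearer ${viewerToken}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({ name: "authz-probe" }),
  });
  // A vulnerable 1.5.0 instance returns 2xx for a viewer credential;
  // a correctly patched deployment should return 403.
  return res.status;
}
```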

Potential Impact

For European organizations, the unauthorized modification of AI models can have significant consequences, especially in sectors relying heavily on AI for decision-making, such as finance, healthcare, and manufacturing. Integrity breaches could lead to incorrect model outputs, resulting in flawed business decisions, regulatory non-compliance, or safety risks. Since lunary-ai/lunary is an AI development platform, compromised models might propagate errors or malicious behaviors into production systems. This could damage organizational reputation and trust in AI solutions. The medium severity rating suggests that while the vulnerability is not critical, it is exploitable with relatively low privileges and no user interaction, increasing the risk of insider threats or lateral movement attacks. The absence of known exploits in the wild reduces immediate risk but does not eliminate the threat, especially as attackers may develop exploits once the vulnerability becomes more widely known.

Mitigation Recommendations

Organizations should take the following steps:

- Immediately review and tighten access controls on lunary-ai/lunary deployments, ensuring that viewer roles cannot perform any write or update operations.
- Implement strict role-based access control (RBAC) policies and validate every API endpoint for proper authorization checks (see the middleware sketch after this list).
- Conduct thorough code audits focused on privilege management in the models.ts file and related components.
- Monitor logs and audit trails for unusual PATCH requests or model modifications by low-privilege users.
- Where possible, isolate AI model management systems from broader network access to reduce the attack surface.
- Engage with the lunary-ai vendor or community to obtain patches or updates addressing this vulnerability.
- Until patches are available, consider compensating controls such as API gateways or web application firewalls (WAFs) to enforce authorization policies.
- Educate developers and administrators about the risks of improper authorization and the importance of secure coding practices.
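As a concrete shape for the RBAC item above, authorization can be enforced deny-by-default in one router-wide middleware, so a forgotten per-route check no longer silently grants viewers write access. This is a minimal sketch assuming an Express-style stack; the role names and request shape are illustrative, not lunary's actual implementation:

```typescript
import { Request, Response, NextFunction } from "express";

type Role = "owner" | "editor" | "viewer";

// Explicit allow-list: writes are denied unless the role is listed here.
const WRITE_ROLES: ReadonlySet<Role> = new Set(["owner", "editor"]);

interface AuthedRequest extends Request {
  user?: { id: string; role: Role };
}

// Deny-by-default guard for all mutating HTTP methods.
export function requireWriteRole(req: AuthedRequest, res: Response, next: NextFunction) {
  const mutating = !["GET", "HEAD", "OPTIONS"].includes(req.method);
  if (mutating && (!req.user || !WRITE_ROLES.has(req.user.role))) {
    res.status(403).json({ error: "insufficient privileges" });
    return;
  }
  next();
}

// Applied once at the router level, e.g. modelsRouter.use(requireWriteRole),
// the guard covers every current and future write route under that router.
```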


Technical Details

Data Version: 5.1
Assigner Short Name: @huntr_ai
Date Reserved: 2024-10-23T05:16:22.182Z
CVSS Version: 3.0
State: PUBLISHED

Threat ID: 68ef9b22178f764e1f4709da

Added to database: 10/15/2025, 1:01:22 PM

Last enriched: 10/15/2025, 1:12:43 PM

Last updated: 10/16/2025, 2:50:02 PM

