
CVE-2024-10954: CWE-94 Improper Control of Generation of Code in binary-husky binary-husky/gpt_academic

Severity: High
Tags: vulnerability, cve-2024-10954, cwe-94
Published: Thu Mar 20 2025 (03/20/2025, 10:10:46 UTC)
Source: CVE Database V5
Vendor/Project: binary-husky
Product: binary-husky/gpt_academic

Description

In versions of binary-husky/gpt_academic prior to the fix, the `manim` plugin improperly handles user-provided prompts: code generated by the LLM is executed without a proper sandbox. This allows an attacker to achieve remote code execution (RCE) on the application's backend server by injecting malicious instructions through the prompt.
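To make the flaw concrete, here is a minimal sketch of the vulnerable pattern, assuming a plugin that forwards the user's prompt to an LLM and runs the result. All names (`query_llm`, `render_manim_scene`) are hypothetical; the actual gpt_academic code differs.

```python
# Hypothetical sketch of CWE-94 as described above; names are illustrative,
# not gpt_academic's real API.

def query_llm(prompt: str) -> str:
    """Stand-in for the real LLM client: returns model-generated Python code.
    A prompt that embeds instructions such as 'import os and run a shell
    command' can steer the model into emitting attacker-controlled code."""
    raise NotImplementedError("placeholder for the actual LLM call")

def render_manim_scene(user_prompt: str) -> None:
    generated_code = query_llm(
        f"Write manim code that renders the following scene:\n{user_prompt}"
    )
    # The flaw: output derived from an untrusted prompt is executed on the
    # backend with the plugin's full privileges and no sandbox.
    exec(generated_code)
```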

AI-Powered Analysis

Last updated: 10/15/2025, 13:19:58 UTC

Technical Analysis

CVE-2024-10954 is a vulnerability classified under CWE-94 (Improper Control of Generation of Code) in the manim plugin of the binary-husky/gpt_academic project. The core issue is that the plugin executes code generated by a large language model (LLM) from user-provided prompts without adequate sandboxing or validation. This design flaw allows an attacker to craft malicious prompts that result in arbitrary code execution on the backend server hosting the application. The vulnerability affects unspecified versions prior to the fix and was published on March 20, 2025.

The CVSS v3.0 score is 8.8 (high severity), corresponding to the vector AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H: network attack vector, low attack complexity, low privileges required, no user interaction, unchanged scope, and high impact on confidentiality, integrity, and availability. Exploitation does not require user interaction, but it does require some level of privilege, such as a user or service account with access to the plugin. No exploits have been reported in the wild yet.

The vulnerability highlights the risks of executing dynamically generated code from LLMs without security controls such as sandboxing, input validation, or code signing, an issue that grows more relevant as AI-powered code generation becomes integrated into software systems. The lack of patch links suggests that a fix may be pending or recently released, so users of this software should give it immediate attention.
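One of the missing controls mentioned above is validation of the generated code before it runs. The following is a hedged sketch of AST-based screening in Python; the deny-lists are assumptions, and a filter like this is bypassable on its own, so it complements rather than replaces a sandbox.

```python
import ast

# Assumed deny-lists for illustration; a real deployment would tune these.
DANGEROUS_MODULES = {"os", "sys", "subprocess", "socket", "shutil", "ctypes"}
DANGEROUS_CALLS = {"exec", "eval", "compile", "__import__", "open"}

def screen_generated_code(source: str) -> None:
    """Raise ValueError if LLM-generated code imports or calls anything on
    the deny-lists. Raises SyntaxError if the output is not valid Python."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            names = [alias.name.split(".")[0] for alias in node.names]
            if isinstance(node, ast.ImportFrom) and node.module:
                names.append(node.module.split(".")[0])
            blocked = DANGEROUS_MODULES.intersection(names)
            if blocked:
                raise ValueError(f"blocked import(s): {sorted(blocked)}")
        elif isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in DANGEROUS_CALLS:
                raise ValueError(f"blocked call: {node.func.id}")
```

Calling screen_generated_code(generated_code) before any execution step would reject output such as `import os` or a bare `eval(...)` call while letting ordinary manim scene code through.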

Potential Impact

For European organizations, the impact of CVE-2024-10954 can be substantial. Successful exploitation leads to remote code execution on backend servers, potentially allowing attackers to fully compromise affected systems. This can result in data breaches, unauthorized access to sensitive information, disruption of services, and lateral movement within networks. Organizations relying on binary-husky/gpt_academic or similar LLM-based code execution frameworks in research, education, or AI development environments are particularly vulnerable. The confidentiality of proprietary AI models, academic research data, and user information could be at risk. Integrity of computational results and availability of services may also be severely impacted, causing operational downtime and reputational damage. Given the increasing adoption of AI tools in European tech sectors, the threat surface is expanding. Additionally, regulatory frameworks such as GDPR impose strict data protection requirements, and a breach exploiting this vulnerability could lead to significant legal and financial penalties.

Mitigation Recommendations

1. Apply official patches or updates from the binary-husky project as soon as they become available.
2. Until patches are deployed, disable or restrict access to the manim plugin or any LLM-based code execution features in production environments.
3. Implement strict input validation and sanitization on all user-provided prompts to prevent injection of malicious code.
4. Employ sandboxing techniques such as containerization or virtual machines to isolate code execution environments, preventing malicious code from affecting the host system (see the sketch after this list).
5. Use runtime monitoring and anomaly detection to identify unusual prompt patterns or execution behaviors indicative of exploitation attempts.
6. Enforce the principle of least privilege for accounts running the plugin to limit the potential impact of a compromise.
7. Conduct security audits and code reviews focusing on AI-generated code execution components.
8. Educate developers and administrators about the risks of executing untrusted AI-generated code and best practices for secure integration.
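As an illustration of recommendation 4, here is a hedged sketch that runs generated code in a throwaway, network-less, non-root Docker container with CPU and memory limits. The image name, resource limits, and availability of the Docker CLI on the host are all assumptions; adapt them to your environment.

```python
import os
import subprocess
import tempfile

def run_sandboxed(generated_code: str, timeout: int = 60) -> str:
    """Execute LLM-generated code in a locked-down, disposable container
    instead of the host process. Sketch only; not gpt_academic's fix."""
    with tempfile.TemporaryDirectory() as workdir:
        script = os.path.join(workdir, "scene.py")
        with open(script, "w") as f:
            f.write(generated_code)
        result = subprocess.run(
            [
                "docker", "run", "--rm",
                "--network", "none",          # no outbound network access
                "--memory", "512m",           # cap memory
                "--cpus", "1",                # cap CPU
                "--read-only",                # immutable root filesystem
                "--user", "65534:65534",      # run as an unprivileged user
                "-v", f"{workdir}:/work:ro",  # mount the script read-only
                "python:3.11-slim",           # assumed base image
                "python", "/work/scene.py",
            ],
            capture_output=True, text=True, timeout=timeout,
        )
        return result.stdout
```

Combining this isolation with the screening shown earlier and least-privilege service accounts (recommendations 3 and 6) substantially limits what injected code can reach even if it executes.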


Technical Details

Data Version: 5.1
Assigner Short Name: @huntr_ai
Date Reserved: 2024-11-06T21:38:38.201Z
CVSS Version: 3.0
State: PUBLISHED

Threat ID: 68ef9b23178f764e1f470a6c

Added to database: 10/15/2025, 1:01:23 PM

Last enriched: 10/15/2025, 1:19:58 PM

Last updated: 10/16/2025, 11:43:50 AM
