Skip to main content
Press slash or control plus K to focus the search. Use the arrow keys to navigate results and press enter to open a threat.
Reconnecting to live updates…

CVE-2024-10950: CWE-94 Improper Control of Generation of Code in binary-husky binary-husky/gpt_academic

0
High
VulnerabilityCVE-2024-10950cvecve-2024-10950cwe-94
Published: Thu Mar 20 2025 (03/20/2025, 10:10:36 UTC)
Source: CVE Database V5
Vendor/Project: binary-husky
Product: binary-husky/gpt_academic

Description

In binary-husky/gpt_academic version <= 3.83, the plugin `CodeInterpreter` is vulnerable to code injection caused by prompt injection. The root cause is the execution of user-provided prompts that generate untrusted code without a sandbox, allowing the execution of parts of the LLM-generated code. This vulnerability can be exploited by an attacker to achieve remote code execution (RCE) on the application backend server, potentially gaining full control of the server.

AI-Powered Analysis

AILast updated: 10/15/2025, 13:19:38 UTC

Technical Analysis

CVE-2024-10950 affects the binary-husky/gpt_academic software, specifically versions up to 3.83, through a vulnerability in its CodeInterpreter plugin. The root cause is improper control over code generation (CWE-94), where user-supplied prompts are executed directly as code without sandboxing or sufficient validation. This prompt injection vulnerability allows an attacker to craft malicious input that the system interprets and executes on the backend server, resulting in remote code execution (RCE). The vulnerability is critical because it compromises confidentiality, integrity, and availability of the affected system. The CVSS score of 8.8 reflects its high severity, with network attack vector, low attack complexity, requiring privileges but no user interaction, and impacting all security properties. The lack of sandboxing means that any code generated by the LLM in response to user prompts can be executed, making it possible for attackers to run arbitrary commands or scripts. Although no known exploits have been reported in the wild, the vulnerability poses a significant risk to organizations relying on this software for academic or research purposes, where sensitive data and computational resources are involved. The absence of patches or official mitigations at the time of publication increases the urgency for organizations to implement compensating controls.

Potential Impact

For European organizations, this vulnerability could lead to severe consequences including unauthorized access to sensitive research data, disruption of academic services, and potential lateral movement within networks. The ability to execute arbitrary code remotely on backend servers can result in data breaches, loss of intellectual property, and damage to organizational reputation. Given the increasing use of AI and LLM-based tools in European academic and research institutions, exploitation could impact critical research projects and collaborations. Additionally, compromised servers could be leveraged to launch further attacks or serve as a foothold for espionage or sabotage. The high severity and ease of exploitation underscore the threat to confidentiality, integrity, and availability of affected systems.

Mitigation Recommendations

Organizations should immediately assess their use of binary-husky/gpt_academic and disable the CodeInterpreter plugin if possible. Implement strict input validation and sanitization to prevent malicious prompt injection. Employ sandboxing or containerization techniques to isolate code execution environments and prevent unauthorized system access. Monitor logs and network traffic for unusual activity indicative of exploitation attempts. Apply any vendor patches or updates as soon as they become available. Restrict access to the application backend to trusted users and networks, and enforce the principle of least privilege for accounts interacting with the vulnerable component. Consider deploying web application firewalls (WAFs) with custom rules to detect and block suspicious input patterns. Regularly review and update incident response plans to include scenarios involving LLM-based code execution vulnerabilities.

Need more detailed analysis?Get Pro

Technical Details

Data Version
5.1
Assigner Short Name
@huntr_ai
Date Reserved
2024-11-06T20:44:10.220Z
Cvss Version
3.0
State
PUBLISHED

Threat ID: 68ef9b23178f764e1f470a69

Added to database: 10/15/2025, 1:01:23 PM

Last enriched: 10/15/2025, 1:19:38 PM

Last updated: 10/16/2025, 1:45:27 PM

Views: 1

Community Reviews

0 reviews

Crowdsource mitigation strategies, share intel context, and vote on the most helpful responses. Sign in to add your voice and help keep defenders ahead.

Sort by
Loading community insights…

Want to contribute mitigation steps or threat intel context? Sign in or create an account to join the community discussion.

Actions

PRO

Updates to AI analysis require Pro Console access. Upgrade inside Console → Billing.

Please log in to the Console to use AI analysis features.

Need enhanced features?

Contact root@offseq.com for Pro access with improved analysis and higher rate limits.

Latest Threats