Skip to main content
Press slash or control plus K to focus the search. Use the arrow keys to navigate results and press enter to open a threat.
Reconnecting to live updates…

CVE-2024-48142: n/a

0
High
VulnerabilityCVE-2024-48142cvecve-2024-48142
Published: Thu Oct 24 2024 (10/24/2024, 00:00:00 UTC)
Source: CVE Database V5

Description

A prompt injection vulnerability in the chatbox of Butterfly Effect Limited Monica ChatGPT AI Assistant v2.4.0 allows attackers to access and exfiltrate all previous and subsequent chat data between the user and the AI assistant via a crafted message.

AI-Powered Analysis

Machine-generated threat intelligence

AILast updated: 02/26/2026, 08:52:35 UTC

Technical Analysis

CVE-2024-48142 is a prompt injection vulnerability identified in Butterfly Effect Limited's Monica ChatGPT AI Assistant version 2.4.0. The vulnerability arises from insufficient input validation and sanitization within the chatbox interface, allowing an attacker to inject crafted messages that manipulate the AI assistant's processing logic. This manipulation enables the attacker to access and exfiltrate all chat data, including both previous conversations and any future messages exchanged with the AI assistant. The vulnerability is categorized under CWE-77, indicating command injection issues. The attack vector is network-based (AV:N), requiring no privileges (PR:N) or user interaction (UI:N), making exploitation straightforward for remote attackers. The scope is unchanged (S:U), meaning the vulnerability affects only the vulnerable component without impacting other system components. The impact is high on confidentiality (C:H) but does not affect integrity (I:N) or availability (A:N). No patches have been published yet, and no known exploits are reported in the wild. The vulnerability was reserved on October 8, 2024, and published on October 24, 2024, indicating recent discovery. The lack of affected version specifics suggests all instances of v2.4.0 or similar builds may be vulnerable. This vulnerability poses a significant risk to data privacy and confidentiality in environments using this AI assistant for sensitive communications.

Potential Impact

The primary impact of CVE-2024-48142 is the unauthorized disclosure of sensitive chat data, which can include confidential business information, personally identifiable information (PII), or proprietary communication. Organizations relying on Monica ChatGPT AI Assistant for customer support, internal collaboration, or decision-making processes could face data breaches leading to reputational damage, regulatory penalties (e.g., GDPR, HIPAA), and loss of customer trust. Since the vulnerability allows access to both historical and future chat data, attackers could continuously monitor ongoing communications, potentially facilitating further attacks such as social engineering or corporate espionage. The ease of exploitation without authentication or user interaction increases the risk of widespread abuse. Although availability and integrity are not directly impacted, the confidentiality breach alone is significant, especially in sectors handling sensitive data such as finance, healthcare, and government. The absence of known exploits in the wild currently limits immediate risk but does not preclude future exploitation once the vulnerability becomes widely known.

Mitigation Recommendations

1. Immediate mitigation should focus on restricting network access to the Monica ChatGPT AI Assistant chatbox interface, limiting it to trusted internal networks or VPNs to reduce exposure. 2. Implement input validation and sanitization at the application layer to detect and block malicious prompt injection patterns, specifically targeting command injection vectors (CWE-77). 3. Monitor and audit chat logs for unusual patterns or unexpected data access attempts that may indicate exploitation attempts. 4. Employ Web Application Firewalls (WAFs) with custom rules designed to detect and block injection payloads targeting the chatbox. 5. Coordinate with Butterfly Effect Limited for timely patch releases and apply updates as soon as they become available. 6. Educate users and administrators about the risks of prompt injection and encourage cautious handling of AI assistant interactions. 7. Consider deploying AI assistant instances in isolated environments with strict data access controls to minimize potential data leakage. 8. Conduct penetration testing and security assessments focused on AI assistant interfaces to identify and remediate similar vulnerabilities proactively.

Pro Console: star threats, build custom feeds, automate alerts via Slack, email & webhooks.Upgrade to Pro

Technical Details

Data Version
5.1
Assigner Short Name
mitre
Date Reserved
2024-10-08T00:00:00.000Z
Cvss Version
3.1
State
PUBLISHED

Threat ID: 699f6d0db7ef31ef0b56d7a4

Added to database: 2/25/2026, 9:43:41 PM

Last enriched: 2/26/2026, 8:52:35 AM

Last updated: 4/12/2026, 9:10:29 AM

Views: 13

Community Reviews

0 reviews

Crowdsource mitigation strategies, share intel context, and vote on the most helpful responses. Sign in to add your voice and help keep defenders ahead.

Sort by
Loading community insights…

Want to contribute mitigation steps or threat intel context? Sign in or create an account to join the community discussion.

Actions

PRO

Updates to AI analysis require Pro Console access. Upgrade inside Console → Billing.

Please log in to the Console to use AI analysis features.

Need more coverage?

Upgrade to Pro Console for AI refresh and higher limits.

For incident response and remediation, OffSeq services can help resolve threats faster.

Latest Threats

Breach by OffSeqOFFSEQFRIENDS — 25% OFF

Check if your credentials are on the dark web

Instant breach scanning across billions of leaked records. Free tier available.

Scan now
OffSeq TrainingCredly Certified

Lead Pen Test Professional

Technical5-day eLearningPECB Accredited
View courses