Skip to main content
DashboardThreatsMapFeedsAPI
reconnecting
Press slash or control plus K to focus the search. Use the arrow keys to navigate results and press enter to open a threat.
Reconnecting to live updates…

CometJacking: Prompt Injection Attack Turns Perplexity’s Comet AI Browser Into a Data Exfiltration Vector

0
High
Published: Sun Oct 05 2025 (10/05/2025, 21:48:37 UTC)
Source: Community Curated

Description

Researchers disclosed 'CometJacking,' a prompt injection attack targeting Perplexity's Comet AI browser that uses a malicious URL to hijack the AI assistant and exfiltrate sensitive data such as emails and calendar entries. The attack bypasses existing data protections using simple Base64 encoding and leverages the browser's authorized access to connected services, highlighting new security risks in AI-native browsers.

AI-Powered Analysis

AILast updated: 10/05/2025, 21:48:37 UTC

Technical Analysis

Researchers disclosed 'CometJacking,' a prompt injection attack targeting Perplexity's Comet AI browser that uses a malicious URL to hijack the AI assistant and exfiltrate sensitive data such as emails and calendar entries. The attack bypasses existing data protections using simple Base64 encoding and leverages the browser's authorized access to connected services, highlighting new security risks in AI-native browsers.

Potential Impact

The article provides detailed, original research on a novel attack vector against AI-native browsers, including technical details and mitigation insights, making it highly relevant and valuable for cybersecurity defenders.

Mitigation Recommendations

Defenders should evaluate and implement controls to detect and block malicious agent prompts in AI browsers, monitor for suspicious URL-based prompt injections, and enforce security-by-design principles for AI assistant memory and prompt handling.

Need more detailed analysis?Get Pro

Required Action

Defenders should evaluate and implement controls to detect and block malicious agent prompts in AI browsers, monitor for suspicious URL-based prompt injections, and enforce security-by-design principles for AI assistant memory and prompt handling.

Technical Details

Community Item Id
68e2e7b547833ad03504d7fe
Community Submitter Notes
null

Threat ID: 68e2e7b547833ad03504d801

Added to database: 10/5/2025, 9:48:37 PM

Last enriched: 10/5/2025, 9:48:37 PM

Last updated: 10/6/2025, 2:33:39 AM

Views: 15

Community Reviews

0 reviews

Crowdsource mitigation strategies, share intel context, and vote on the most helpful responses. Sign in to add your voice and help keep defenders ahead.

Sort by
Loading community insights…

Want to contribute mitigation steps or threat intel context? Sign in or create an account to join the community discussion.

Actions

PRO

Updates to AI analysis are available only with a Pro account. Contact root@offseq.com for access.

Please log in to the Console to use AI analysis features.

Need enhanced features?

Contact root@offseq.com for Pro access with improved analysis and higher rate limits.

Latest Threats