OpenAI Disrupts Russian, North Korean, and Chinese Hackers Misusing ChatGPT for Cyberattacks
OpenAI on Tuesday said it disrupted three activity clusters for misusing its ChatGPT artificial intelligence (AI) tool to facilitate malware development. These include a Russian-language threat actor who is said to have used the chatbot to help develop and refine a remote access trojan (RAT) and a credential stealer designed to evade detection. The operator also used several ChatGPT accounts to prototype and debug code components for obfuscation, clipboard monitoring, and data exfiltration via Telegram bots.
AI Analysis
Technical Summary
OpenAI identified and disrupted three primary clusters of malicious cyber activity involving the misuse of its ChatGPT AI tool by threat actors from Russia, North Korea, and China.

The Russian-language group used ChatGPT to develop and refine a remote access trojan (RAT) and credential stealer designed to evade detection. This actor used multiple ChatGPT accounts to prototype and debug code components for obfuscation, clipboard monitoring, and data exfiltration via Telegram bots.

The North Korean cluster, linked to campaigns targeting diplomatic missions in South Korea, used ChatGPT to develop malware including macOS Finder extensions, configure Windows Server VPNs, and convert browser extensions to facilitate command-and-control (C2) operations. They also leveraged the AI to draft phishing emails and explore advanced malware techniques such as DLL loading and Windows API hooking.

The Chinese cluster, associated with the UNK_DropPitch group, targeted investment firms in the Taiwanese semiconductor industry using phishing campaigns and a backdoor called HealthKick (GOVERSHELL). They used ChatGPT to generate phishing content in multiple languages and to accelerate tooling for remote execution and traffic protection.

Beyond malware development, other networks from Cambodia, Myanmar, and Nigeria exploited ChatGPT for online scams and influence operations, including social media content generation and translation. Chinese-linked accounts also used ChatGPT for surveillance-related research and influence campaigns targeting political figures in Southeast Asia.

OpenAI noted that while its models refused direct malicious content requests, threat actors circumvented restrictions by assembling building-block code snippets. This demonstrates how AI tools can incrementally enhance attacker efficiency and sophistication. The report underscores the need for AI safety research and auditing tools to detect and mitigate AI-assisted cyber threats.
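Exfiltration through Telegram bots, as described for the Russian-language cluster, typically surfaces in proxy or firewall logs as HTTPS requests to the Telegram Bot API, whose URLs follow a fixed `/bot<token>/<method>` shape. The sketch below is a minimal, illustrative detection pass over such logs; the log line format is an assumption for demonstration, not taken from the report:

```python
import re

# Telegram Bot API calls follow a fixed URL shape: /bot<token>/<method>.
# Upload/send methods reached from unexpected hosts can indicate data
# exfiltration through an attacker-controlled bot.
BOT_API = re.compile(r"api\.telegram\.org/bot\d+:[A-Za-z0-9_-]+/(\w+)")

SUSPICIOUS_METHODS = {"sendDocument", "sendMessage", "sendPhoto"}

def flag_telegram_exfil(proxy_log_lines):
    """Return (line, method) pairs for log entries that call the
    Telegram Bot API with an upload/send method."""
    hits = []
    for line in proxy_log_lines:
        m = BOT_API.search(line)
        if m and m.group(1) in SUSPICIOUS_METHODS:
            hits.append((line, m.group(1)))
    return hits

# Synthetic example log lines (hypothetical format):
logs = [
    "10.0.0.5 GET https://example.com/index.html 200",
    "10.0.0.7 POST https://api.telegram.org/bot123456:AAxyz_token/sendDocument 200",
]
for line, method in flag_telegram_exfil(logs):
    print(method, "->", line.split()[0])  # prints: sendDocument -> 10.0.0.7
```

In environments where Telegram is not sanctioned business software, even a plain match on `api.telegram.org` outbound traffic is a reasonable low-noise alert.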
Potential Impact
European organizations face significant risks from this threat due to the advanced malware and phishing campaigns facilitated by AI tools like ChatGPT. The Russian cluster's development of stealthy RATs and credential stealers threatens the confidentiality and integrity of sensitive data, particularly in sectors reliant on Windows platforms. The North Korean group's targeting of diplomatic missions and use of spear-phishing techniques could extend to European diplomatic and governmental entities, risking espionage and data breaches. The Chinese cluster's focus on the semiconductor industry aligns with Europe's growing semiconductor manufacturing and technology sectors, potentially impacting intellectual property and critical supply chains. Additionally, the use of AI to craft sophisticated phishing campaigns increases the likelihood of successful social engineering attacks against European enterprises. The abuse of AI for influence operations and scams also poses reputational and operational risks, especially in countries with active political discourse and social media engagement. Overall, the threat elevates the complexity and scale of cyberattacks, requiring European organizations to adapt their defenses to AI-augmented adversaries.
Mitigation Recommendations
European organizations should:

- Implement advanced behavioral detection capable of identifying AI-assisted malware development patterns, such as iterative code refinement and obfuscation techniques.
- Deploy endpoint detection and response (EDR) solutions with heuristics tuned to detect clipboard monitoring, unusual Telegram bot traffic, and stealthy RAT behaviors.
- Enhance phishing defenses by integrating AI-driven email filtering that recognizes AI-generated phishing content and language patterns across multiple languages.
- Conduct targeted user awareness training focused on recognizing sophisticated phishing and social engineering tactics that leverage AI-generated content.
- Monitor network traffic for anomalous VPN configurations and suspicious browser extension activity, especially conversions across platforms.
- Collaborate with threat intelligence sharing platforms to stay updated on emerging AI-assisted threat actor tactics and indicators of compromise (IOCs).
- Employ AI auditing tools like Anthropic's Petri to evaluate internal AI system vulnerabilities and prevent misuse.
- Enforce strict access controls and multi-factor authentication to limit the impact of credential theft and post-exploitation activities.
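The threat-intelligence-sharing recommendation above can be operationalised with a simple matching pass that checks outbound connection logs against a shared IOC feed. A hedged sketch follows; the feed format (one indicator per line, `#` comments) and the log line format are illustrative assumptions, and the indicators shown are placeholders, not IOCs from the report:

```python
def load_iocs(feed_text):
    """Parse a shared IOC feed: one domain or IP per line, '#' comments allowed."""
    iocs = set()
    for line in feed_text.splitlines():
        line = line.strip()
        if line and not line.startswith("#"):
            iocs.add(line.lower())
    return iocs

def match_iocs(log_lines, iocs):
    """Return log lines containing any known indicator. Substring matching
    on the lowercased line keeps the sketch short; a real deployment should
    parse fields and match on exact hostnames/IPs to avoid false positives."""
    return [line for line in log_lines if any(ioc in line.lower() for ioc in iocs)]

feed = """# illustrative shared feed (placeholder indicators)
evil-updates.example
203.0.113.42
"""
logs = [
    "conn 10.0.0.4 -> evil-updates.example:443",
    "conn 10.0.0.9 -> intranet.local:80",
]
print(match_iocs(logs, load_iocs(feed)))  # prints the first log line only
```

Running such a pass on a schedule against fresh feed pulls gives even small teams a cheap way to benefit from community-shared indicators.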
Affected Countries
Germany, France, United Kingdom, Netherlands, Italy, Belgium, Sweden
Technical Details
- Article Source: https://thehackernews.com/2025/10/openai-disrupts-russian-north-korean.html (fetched 2025-10-08)
Related Threats
- Your Shipment Notification Is Now a Malware Dropper (Medium)
- New Chaos-C++ Ransomware Targets Windows by Wiping Data and Stealing Crypto (Medium)
- From Phishing to Malware: AI Becomes Russia's New Cyber Weapon in War on Ukraine (Medium)
- Fake Teams Installers Dropping Oyster Backdoor (aka Broomstick) in New Malvertising Scam (Medium)
- China-Nexus Actors Weaponize 'Nezha' Open Source Tool (Medium)