
Google Reports State-Backed Hackers Using Gemini AI for Recon and Attack Support

Severity: Medium
Category: Vulnerability
Published: Thu Feb 12 2026 (02/12/2026, 17:57:00 UTC)
Source: The Hacker News

Description

Google has reported that the North Korea-linked threat actor UNC2970 is leveraging the generative AI model Gemini for reconnaissance and attack support, marking a new phase in AI-assisted cyber espionage. This group uses Gemini to synthesize open-source intelligence (OSINT), profile high-value targets, and craft tailored phishing personas, primarily targeting aerospace, defense, and energy sectors. Multiple other state-backed groups from China and Iran are also weaponizing Gemini to accelerate intelligence gathering, vulnerability analysis, and malware development. Notably, malware like HONESTCUE uses Gemini's API to generate next-stage payloads dynamically, enabling fileless execution and evasion. Additionally, AI-generated phishing kits such as COINBAIT are being deployed for credential harvesting. Google also identified large-scale model extraction attacks against Gemini, threatening intellectual property and AI model confidentiality. This activity blurs the line between legitimate research and malicious reconnaissance, increasing the sophistication and speed of cyberattacks. European organizations in defense, aerospace, and critical infrastructure sectors face heightened risks from these AI-augmented campaigns.
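One practical control implied by the COINBAIT credential-harvesting activity is screening for domains that imitate the cryptocurrency exchanges the kit impersonates. The following is a minimal Python sketch of such a lookalike check; the reference domain list, the similarity threshold, and the candidate domains are illustrative assumptions, not indicators published in the report.

import difflib

# Reference domains a phishing kit might imitate (illustrative list, not from the report).
LEGITIMATE_EXCHANGES = ["binance.com", "coinbase.com", "kraken.com", "bitstamp.net"]

# Similarity threshold is an assumption; tune it against your own false-positive tolerance.
SIMILARITY_THRESHOLD = 0.75


def is_lookalike(domain):
    """Return (True, closest_match) if `domain` closely resembles a known exchange
    domain without being that domain or one of its subdomains."""
    domain = domain.lower().strip(".")
    for legit in LEGITIMATE_EXCHANGES:
        if domain == legit or domain.endswith("." + legit):
            return False, None  # the real site or a legitimate subdomain
        ratio = difflib.SequenceMatcher(None, domain, legit).ratio()
        if ratio >= SIMILARITY_THRESHOLD:
            return True, legit
    return False, None


if __name__ == "__main__":
    # Hypothetical domains pulled from proxy or email-gateway logs.
    for candidate in ["c0inbase.com", "kraken.com", "binance-support.net"]:
        hit, match = is_lookalike(candidate)
        if hit:
            print(f"REVIEW: {candidate} resembles {match}")

A string-similarity check of this kind is only a first-pass filter; it should feed a review queue rather than block traffic outright.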

AI-Powered Analysis

Last updated: 02/12/2026, 18:03:34 UTC

Technical Analysis

Google's Threat Intelligence Group (GTIG) has observed the North Korean threat actor UNC2970 abusing the generative AI model Gemini to enhance cyber espionage operations. UNC2970 synthesizes OSINT data and profiles targets by gathering detailed information on cybersecurity and defense companies, including technical roles and salary data, to support campaign planning and reconnaissance. This AI-assisted profiling enables the crafting of highly tailored phishing personas, increasing the likelihood of successful initial compromise. UNC2970 is linked to the Lazarus Group cluster and is known for Operation Dream Job, which targets aerospace, defense, and energy sectors under the guise of job offers.

Beyond UNC2970, other state-backed groups from China (e.g., APT31, APT41) and Iran (APT42) are weaponizing Gemini for tasks such as vulnerability analysis, exploit development, social engineering, and operational data gathering. Malware like HONESTCUE dynamically generates C# source code via Gemini's API and executes next-stage payloads in memory, avoiding disk artifacts and detection. The AI-generated phishing kit COINBAIT impersonates cryptocurrency exchanges to harvest credentials. Google also detected model extraction attacks targeting Gemini, in which adversaries systematically query the model to replicate its behavior, threatening the confidentiality of proprietary AI models.

This trend represents a significant evolution in cyberattack methodology: AI is used to accelerate reconnaissance, automate exploit development, and enhance social engineering, increasing both the speed and the sophistication of attacks.
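The in-memory payload generation described for HONESTCUE suggests one practical hunting signal: processes with no business calling a generative AI API suddenly doing so. Below is a minimal Python sketch that scans exported endpoint network telemetry, assumed here to be a CSV with host, process, and destination columns, for connections to well-known generative AI endpoints from processes outside an allowlist. The CSV schema, the endpoint list, and the allowlist are illustrative assumptions, not indicators published in the report.

import csv
import sys

# Destinations associated with hosted generative AI APIs (illustrative set, extend as needed).
AI_API_DOMAINS = {
    "generativelanguage.googleapis.com",  # Gemini API
    "api.openai.com",
    "api.anthropic.com",
}

# Processes expected to contact these APIs in this hypothetical environment.
ALLOWED_PROCESSES = {"chrome.exe", "msedge.exe", "approved-ai-client.exe"}


def hunt(telemetry_csv):
    """Yield telemetry rows where an unexpected process contacted a generative AI API endpoint.

    Expects a CSV with 'host', 'process', and 'dest_domain' columns (assumed schema).
    """
    with open(telemetry_csv, newline="") as fh:
        for row in csv.DictReader(fh):
            dest = row.get("dest_domain", "").lower()
            proc = row.get("process", "").lower()
            if dest in AI_API_DOMAINS and proc not in ALLOWED_PROCESSES:
                yield row


if __name__ == "__main__":
    for hit in hunt(sys.argv[1]):
        print(f"REVIEW: {hit['host']} {hit['process']} -> {hit['dest_domain']}")

Hits from a filter like this are leads for triage, not verdicts: legitimate developer tooling can also call these APIs, so the allowlist needs to reflect the actual environment.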

Potential Impact

European organizations, especially those in aerospace, defense, energy, and cybersecurity sectors, face increased risk from AI-augmented cyber espionage campaigns. The use of Gemini AI enables threat actors to conduct more precise and efficient reconnaissance, leading to highly targeted phishing and social engineering attacks that are harder to detect and defend against. The dynamic generation of malware payloads in memory complicates traditional detection methods, increasing the likelihood of successful intrusions and data exfiltration. Model extraction attacks threaten the intellectual property of AI models developed or used by European companies, potentially undermining competitive advantage and exposing sensitive AI capabilities. The blurring of lines between legitimate research and malicious reconnaissance may complicate attribution and response efforts. Overall, this threat could lead to significant confidentiality breaches, operational disruptions, and erosion of trust in AI-driven technologies within European critical infrastructure and defense sectors.

Mitigation Recommendations

European organizations should implement advanced threat detection solutions capable of identifying AI-generated phishing attempts and fileless malware execution, including behavioral analytics and memory forensics. Enhancing email security with AI-driven phishing detection and multi-factor authentication can reduce the risk of credential compromise. Organizations should conduct regular threat hunting focused on AI-assisted attack techniques and monitor for unusual API usage patterns that may indicate model extraction attempts. Protecting proprietary AI models requires implementing strict API rate limiting, anomaly detection, and query response monitoring to detect and prevent model extraction. Employee training should emphasize awareness of sophisticated social engineering tactics that leverage AI-generated personas. Collaboration with intelligence-sharing groups and government agencies can provide timely threat intelligence on evolving AI-enabled threats. Finally, adopting a zero-trust security model and segmenting critical networks can limit lateral movement post-compromise.
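To make the model-extraction controls above more concrete, the sketch below shows one minimal way to track query volume per API client with a sliding window, so that sustained high-rate querying of a hosted or proxied model can be flagged for review. The window length, the threshold, and the client identifier are assumptions for illustration; production anomaly detection would also consider query diversity and response coverage, not just volume.

from collections import defaultdict, deque
import time

# Thresholds are assumptions to tune per deployment, not values from the report.
WINDOW_SECONDS = 300          # look at the last 5 minutes of traffic per client
MAX_QUERIES_PER_WINDOW = 500  # sustained high volume suggests scripted querying


class ExtractionMonitor:
    """Track per-client query rates against a model API and flag likely scripted abuse."""

    def __init__(self):
        self._events = defaultdict(deque)  # client_id -> timestamps of recent queries

    def record_query(self, client_id, timestamp=None):
        """Record one query; return True if the client now exceeds the rate threshold."""
        now = timestamp if timestamp is not None else time.time()
        window = self._events[client_id]
        window.append(now)
        # Drop events that have aged out of the sliding window.
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()
        return len(window) > MAX_QUERIES_PER_WINDOW


if __name__ == "__main__":
    monitor = ExtractionMonitor()
    # Simulate a single client issuing 600 queries within a few seconds.
    flagged = False
    for i in range(600):
        flagged = monitor.record_query("client-42", timestamp=1000.0 + i * 0.01)
    print("flag raised:", flagged)  # True once the per-window threshold is crossed

A flag from this kind of monitor would typically trigger tighter rate limiting or a manual review of the client's query patterns rather than an immediate block.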


Technical Details

Article Source
URL: https://thehackernews.com/2026/02/google-reports-state-backed-hackers.html
Fetched: 2026-02-12T18:03:19Z (word count: 1,386)

Threat ID: 698e15e7c9e1ff5ad8fa95d6

Added to database: 2/12/2026, 6:03:19 PM

Last enriched: 2/12/2026, 6:03:34 PM

Last updated: 2/12/2026, 8:13:08 PM



