
Google Reports State-Backed Hackers Using Gemini AI for Recon and Attack Support

Medium
Vulnerability
Published: Thu Feb 12 2026 (02/12/2026, 17:57:00 UTC)
Source: The Hacker News

Description

Google on Thursday said it observed the North Korea-linked threat actor known as UNC2970 using its generative artificial intelligence (AI) model Gemini to conduct reconnaissance on its targets. Hacking groups continue to weaponize the tool to accelerate phases of the cyber attack life cycle, enable information operations, and even conduct model extraction attacks.

AI-Powered Analysis

Machine-generated threat intelligence

Last updated: 02/12/2026, 18:03:34 UTC

Technical Analysis

Google's Threat Intelligence Group (GTIG) has observed the North Korean threat actor UNC2970 using the generative AI model Gemini to enhance its cyber espionage operations. UNC2970 synthesizes open-source intelligence (OSINT) and profiles targets by gathering detailed information on cybersecurity and defense companies, including technical roles and salary data, to support campaign planning and reconnaissance. This AI-assisted profiling enables the crafting of highly tailored phishing personas, increasing the likelihood of a successful initial compromise. UNC2970 is linked to the Lazarus Group cluster and is known for Operation Dream Job, which targets the aerospace, defense, and energy sectors under the guise of job offers.

Beyond UNC2970, state-backed groups from China (e.g., APT31, APT41) and Iran (APT42) are weaponizing Gemini for tasks such as vulnerability analysis, exploit development, social engineering, and operational data gathering. The malware HONESTCUE dynamically generates C# source code via Gemini's API and executes next-stage payloads in memory, avoiding disk artifacts and detection. The AI-generated phishing kit COINBAIT impersonates cryptocurrency exchanges to harvest credentials.

Google also detected model extraction attacks against Gemini, in which adversaries systematically query the model to replicate its behavior, threatening the confidentiality of proprietary AI models. This trend marks a significant evolution in attack methodology: AI is being used to accelerate reconnaissance, automate exploit development, and sharpen social engineering, increasing both the speed and sophistication of attacks.
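To illustrate the "systematic querying" signature behind model extraction, the sketch below flags clients whose query volume is high and whose consecutive prompts are nearly identical, as templated probing tends to be. This is a hypothetical heuristic: the thresholds, the Jaccard similarity measure, and the `flag_extraction_suspects` function are illustrative assumptions, not Google's actual detection logic.

```python
from collections import defaultdict

def jaccard(a: str, b: str) -> float:
    """Token-set Jaccard similarity between two prompts."""
    sa, sb = set(a.split()), set(b.split())
    if not sa and not sb:
        return 1.0
    return len(sa & sb) / len(sa | sb)

def flag_extraction_suspects(queries, volume_threshold=100, similarity_threshold=0.6):
    """Return client IDs whose query count exceeds volume_threshold and whose
    consecutive prompts are, on average, highly similar -- a crude signature
    of templated, systematic probing. Thresholds are illustrative."""
    by_client = defaultdict(list)
    for client_id, prompt in queries:
        by_client[client_id].append(prompt)

    suspects = []
    for client_id, prompts in by_client.items():
        if len(prompts) < volume_threshold:
            continue
        sims = [jaccard(p, q) for p, q in zip(prompts, prompts[1:])]
        if sum(sims) / len(sims) >= similarity_threshold:
            suspects.append(client_id)
    return suspects

# Example: one client issues 150 near-identical templated probes,
# another issues a handful of varied queries.
probes = [("attacker", f"Translate phrase number {i} into French please") for i in range(150)]
normal = [("analyst", p) for p in ["summarize this report", "what is CVE scoring", "draft an email"]]
print(flag_extraction_suspects(probes + normal))  # ['attacker']
```

Real detection pipelines would combine such volume/similarity signals with account age, billing anomalies, and output-space coverage rather than rely on any single heuristic.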

Potential Impact

European organizations, especially those in aerospace, defense, energy, and cybersecurity sectors, face increased risk from AI-augmented cyber espionage campaigns. The use of Gemini AI enables threat actors to conduct more precise and efficient reconnaissance, leading to highly targeted phishing and social engineering attacks that are harder to detect and defend against. The dynamic generation of malware payloads in memory complicates traditional detection methods, increasing the likelihood of successful intrusions and data exfiltration. Model extraction attacks threaten the intellectual property of AI models developed or used by European companies, potentially undermining competitive advantage and exposing sensitive AI capabilities. The blurring of lines between legitimate research and malicious reconnaissance may complicate attribution and response efforts. Overall, this threat could lead to significant confidentiality breaches, operational disruptions, and erosion of trust in AI-driven technologies within European critical infrastructure and defense sectors.

Mitigation Recommendations

European organizations should:

- Deploy advanced threat detection capable of identifying AI-generated phishing attempts and fileless malware execution, including behavioral analytics and memory forensics.
- Strengthen email security with AI-driven phishing detection and enforce multi-factor authentication to reduce the risk of credential compromise.
- Conduct regular threat hunting focused on AI-assisted attack techniques and monitor for unusual API usage patterns that may indicate model extraction attempts.
- Protect proprietary AI models with strict API rate limiting, anomaly detection, and query/response monitoring to detect and prevent extraction.
- Train employees to recognize sophisticated social engineering tactics that leverage AI-generated personas.
- Collaborate with intelligence-sharing groups and government agencies for timely intelligence on evolving AI-enabled threats.
- Adopt a zero-trust security model and segment critical networks to limit lateral movement post-compromise.
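The API rate-limiting recommendation above can be sketched as a sliding-window limiter. This is a minimal illustration, not a production design; the quota, window size, and `SlidingWindowLimiter` class are assumptions for the example, and a real deployment would also log and alert on clients that repeatedly hit the limit.

```python
import time
from collections import defaultdict, deque

class SlidingWindowLimiter:
    """Allow at most max_requests per client within a trailing time window."""

    def __init__(self, max_requests, window_seconds):
        self.max_requests = max_requests
        self.window = window_seconds
        self._hits = defaultdict(deque)  # client_id -> request timestamps

    def allow(self, client_id, now=None):
        """Record a request; return False once the client exceeds the quota
        within the trailing window."""
        now = time.monotonic() if now is None else now
        hits = self._hits[client_id]
        # Evict timestamps that have aged out of the window.
        while hits and now - hits[0] > self.window:
            hits.popleft()
        if len(hits) >= self.max_requests:
            return False
        hits.append(now)
        return True

limiter = SlidingWindowLimiter(max_requests=3, window_seconds=60)
results = [limiter.allow("client-a", now=t) for t in (0, 10, 20, 30, 70)]
print(results)  # [True, True, True, False, True]
```

The fourth request is throttled because three requests already sit inside the 60-second window; by t=70 the earliest timestamp has aged out, so the client is admitted again.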


Technical Details

Article Source
{"url":"https://thehackernews.com/2026/02/google-reports-state-backed-hackers.html","fetched":true,"fetchedAt":"2026-02-12T18:03:19.739Z","wordCount":1386}

Threat ID: 698e15e7c9e1ff5ad8fa95d6

Added to database: 2/12/2026, 6:03:19 PM

Last enriched: 2/12/2026, 6:03:34 PM

Last updated: 3/30/2026, 2:05:06 AM

