
How scammers have mastered AI: deepfakes, fake websites, and phishing emails | Kaspersky official blog

Severity: Medium
Tags: Phishing, web
Published: Fri Sep 26 2025 (09/26/2025, 17:38:23 UTC)
Source: Kaspersky Security Blog

Description

Scammers are increasingly leveraging AI technologies such as deepfakes, AI-generated phishing websites, and automated voice bots to conduct sophisticated phishing and social engineering attacks. These AI-powered scams include creating realistic fake personas for romance scams, generating convincing audio and video deepfakes to impersonate trusted individuals, and manipulating AI chatbots into steering users toward phishing sites. AI tools also enable rapid creation of professional-looking phishing websites that use HTTPS and legitimate-looking content, making detection harder. Automated calls mimicking banks or government agencies use AI-generated voices to trick victims into revealing sensitive information. AI-powered browser assistants can be manipulated into visiting phishing sites and even submitting credentials without the user's awareness. The threat landscape is evolving rapidly as AI lowers the cost and increases the scale and sophistication of scams. European organizations and individuals face increased risk from these AI-enhanced social engineering attacks, which can lead to financial loss, credential theft, and reputational damage. Vigilance, user education, and advanced detection technologies are critical to mitigating these threats.

AI-Powered Analysis

Last updated: 10/07/2025, 01:34:25 UTC

Technical Analysis

This threat involves the use of advanced artificial intelligence technologies by scammers to enhance phishing and social engineering attacks. AI enables the creation of deepfakes—highly realistic synthetic audio and video impersonations—that scammers use to mimic trusted individuals such as family members, celebrities, or company executives. These deepfakes can be deployed in real-time video or audio calls, increasing the likelihood of successful deception. AI also facilitates the rapid generation of convincing phishing websites that use HTTPS, cookie consent banners, and professional designs, making them difficult to distinguish from legitimate sites. Additionally, AI-powered chatbots and browser assistants, which many users rely on for convenience, can be manipulated to visit phishing sites and even submit sensitive information automatically, effectively bypassing user scrutiny. Automated voice bots simulate legitimate customer support calls to extract confidential data. The use of AI reduces the cost and effort required to conduct large-scale, personalized scams such as “pig butchering” romance scams, where scammers build long-term emotional relationships to defraud victims. The threat is compounded by the availability of AI deepfake creation services on the dark web at relatively low prices, making these tools accessible to a wide range of malicious actors. Although some dark web offerings may be scams themselves, the overall trend shows a significant increase in AI-assisted fraud sophistication. The threat does not require technical vulnerabilities in software but exploits human trust and social engineering, making it highly effective and challenging to counter.

Potential Impact

For European organizations, the impact of AI-enhanced phishing and scams can be severe. Financial institutions, government agencies, and large enterprises are prime targets due to the potential for high-value fraud and data breaches. Employees may be tricked into revealing credentials or authorizing fraudulent transactions, leading to direct financial losses and regulatory penalties under GDPR for data breaches. The reputational damage from successful scams can erode customer trust and brand integrity. Small and medium enterprises (SMEs) may suffer disproportionately due to limited cybersecurity resources and awareness. Individuals are also at risk of significant personal financial loss and identity theft, especially from romance scams and deepfake impersonations. The use of AI to create realistic phishing websites and automated calls increases the scale and success rate of attacks, potentially overwhelming existing detection and response capabilities. The threat also complicates incident response, as attackers use multiple channels and personas to evade detection. Overall, the threat increases the attack surface and requires enhanced vigilance and adaptive security measures across Europe.

Mitigation Recommendations

European organizations should implement multi-layered defenses tailored to AI-enhanced social engineering threats. Specific measures include:

1. Conducting targeted employee awareness training focused on recognizing AI-generated deepfakes, phishing websites, and automated scam calls, including practical verification techniques such as out-of-band confirmation and code words for sensitive requests.
2. Deploying advanced email and web filtering solutions that incorporate AI-based detection of phishing URLs and suspicious content, complemented by real-time threat intelligence feeds.
3. Enforcing strong multi-factor authentication (MFA) to reduce the impact of credential theft.
4. Monitoring for anomalous user behavior and transaction patterns indicative of social engineering compromise.
5. Restricting and auditing permissions granted to AI-powered browser assistants and chatbots, limiting their ability to access sensitive data or perform transactions without explicit user consent.
6. Encouraging users to verify unexpected requests for money or sensitive information through independent communication channels.
7. Implementing domain and certificate monitoring to detect fraudulent phishing sites quickly.
8. Collaborating with law enforcement and cybersecurity communities to share intelligence on emerging AI scam tactics.
9. For individuals, promoting skepticism of unsolicited offers, especially those involving urgency or financial transactions, and verifying identities through multiple channels.
10. Regularly updating security policies to address the evolving AI threat landscape and integrating AI threat detection capabilities into security operations centers (SOCs).
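To make items 2 and 7 concrete, here is a minimal sketch of two building blocks: lexical risk scoring for phishing URLs and edit-distance flagging of lookalike domains. The keyword list, TLD watchlist, weights, and threshold are illustrative assumptions, not vetted detection rules; production deployments would layer threat-intelligence feeds and ML-based classifiers on top of anything this simple.

```python
from urllib.parse import urlparse

# Illustrative watchlists (assumptions, not a vetted ruleset).
SUSPICIOUS_TLDS = {"zip", "top", "xyz", "click"}
PHISHING_KEYWORDS = {"login", "verify", "secure", "account", "update"}

def url_risk_score(url: str) -> int:
    """Score a URL with simple lexical heuristics; higher = more suspicious."""
    host = urlparse(url).hostname or ""
    score = 0
    if host.count(".") >= 3:                           # deeply nested subdomains
        score += 1
    if any(k in url.lower() for k in PHISHING_KEYWORDS):
        score += 1
    if host.rsplit(".", 1)[-1] in SUSPICIOUS_TLDS:     # risky top-level domain
        score += 1
    if "-" in host:                                    # hyphenated lookalike hosts
        score += 1
    if host.replace(".", "").isdigit():                # raw IP instead of a domain
        score += 2
    return score

def levenshtein(a: str, b: str) -> int:
    """Edit distance between two strings (dynamic programming, one row at a time)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                # deletion
                           cur[j - 1] + 1,             # insertion
                           prev[j - 1] + (ca != cb)))  # substitution
        prev = cur
    return prev[-1]

def is_lookalike(candidate: str, monitored: str, max_dist: int = 2) -> bool:
    """Flag near-miss registrations of a monitored brand domain (item 7)."""
    return 0 < levenshtein(candidate, monitored) <= max_dist
```

In practice, candidate domains for `is_lookalike` would come from newly observed certificates in Certificate Transparency logs or from new-domain registration feeds, and `url_risk_score` would run inside the mail or web gateway alongside reputation lookups.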


Technical Details

Article Source
{"url":"https://www.kaspersky.com/blog/ai-phishing-and-scams/54445/","fetched":true,"fetchedAt":"2025-10-07T01:33:07.651Z","wordCount":2295}

Threat ID: 68e46dd46a45552f36e95756

Added to database: 10/7/2025, 1:33:08 AM

Last enriched: 10/7/2025, 1:34:25 AM

Last updated: 10/7/2025, 9:01:33 AM


