
AI-powered sextortion: a new threat to privacy | Kaspersky official blog

Medium
Vulnerability
Published: Thu Jan 15 2026 (01/15/2026, 15:09:01 UTC)
Source: Kaspersky Security Blog

Description

Ordinary photos from your social media can be turned into tools for AI-driven sextortion and deepfakes. How can you protect your privacy and security?

AI-Powered Analysis

Last updated: 01/15/2026, 15:19:44 UTC

Technical Analysis

The emergence of generative AI technologies has revolutionized the creation of synthetic media, enabling the rapid production of highly realistic images and videos from simple text prompts or existing photos. This capability has been weaponized in the form of AI-powered sextortion, where attackers generate fake nude or sexualized images of individuals using their publicly available social media photos. Unlike traditional sextortion, which relied on actual intimate content or hacking, AI-driven sextortion can target anyone with an online presence.

In 2025, cybersecurity researchers uncovered multiple unsecured, publicly accessible databases containing millions of AI-generated images, many of which were pornographic or sexualized depictions of real individuals, including minors and celebrities. These databases originated from AI services such as MagicEdit and DreamPal, which offered tools for virtual clothing changes, face-swapping, and explicit content generation. The lack of proper security controls on these databases allowed unrestricted access to sensitive synthetic content, exacerbating privacy risks.

The threat extends beyond individual victims to organizations, as employees’ reputations and mental health may be impacted, potentially affecting workplace productivity and trust. The rapid addition of thousands of images daily indicates ongoing exploitation. The threat highlights the challenges of regulating AI content generation and protecting personal data in the digital age. Kaspersky recommends minimizing public exposure of personal images, employing privacy-enhancing tools, and educating users on the risks of AI-generated sextortion. Parental controls and monitoring are advised to protect minors from becoming victims. This threat represents a new dimension of privacy invasion and social engineering that organizations and individuals must address proactively.
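The exposed stores described above were reachable simply because they answered requests without any authentication. As a rough illustration only, the sketch below checks whether an HTTP-accessible data store responds to anonymous GET requests. The endpoint URLs, and the assumption that the stores expose Elasticsearch-style index listings or bucket-listing APIs, are hypothetical and are not details taken from the article.

```python
# Minimal sketch, assuming the exposed stores are ordinary HTTP endpoints
# (e.g. Elasticsearch-style REST APIs or object-storage listings). The URLs
# below are placeholders, not the services named in the article.
import requests

ENDPOINTS = [
    "https://example-ai-service.invalid:9200/_cat/indices",  # hypothetical search index
    "https://example-bucket.invalid/?list-type=2",           # hypothetical bucket listing
]

def is_publicly_readable(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers an unauthenticated GET with data."""
    try:
        resp = requests.get(url, timeout=timeout)
    except requests.RequestException:
        return False  # unreachable or refused: not publicly readable from here
    # A 200 response with a non-empty body to an anonymous request is the
    # misconfiguration described above: anyone can enumerate the stored content.
    return resp.status_code == 200 and len(resp.content) > 0

if __name__ == "__main__":
    for url in ENDPOINTS:
        status = "EXPOSED" if is_publicly_readable(url) else "not readable anonymously"
        print(f"{url}: {status}")
```

In practice such checks belong in the storage provider's own deployment pipeline and should be paired with authentication, network restrictions, and encryption; the sketch only demonstrates the class of misconfiguration, not a complete audit.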

Potential Impact

For European organizations, AI-powered sextortion poses multifaceted risks. Employee privacy breaches can lead to psychological distress, reduced morale, and potential insider threats if blackmailed individuals are coerced into malicious actions. Organizations with public-facing employees or influencers may suffer reputational damage if synthetic explicit content circulates online. The threat also complicates HR and legal processes, as distinguishing real from AI-generated content becomes challenging. Regulatory compliance under GDPR and other privacy laws may be impacted if organizations fail to protect employee data or respond adequately to sextortion incidents.

The widespread availability of AI tools lowers the barrier for attackers, increasing the likelihood of targeted campaigns against high-profile individuals or executives. Furthermore, the normalization of AI-generated deepfakes could erode trust in digital communications and complicate incident response. Organizations involved in AI development or digital marketing must also consider the ethical and security implications of their tools being misused or leaking sensitive synthetic content. Overall, the threat undermines personal and organizational security, necessitating comprehensive privacy and security strategies.

Mitigation Recommendations

1. Enforce strict privacy settings on all corporate and personal social media accounts to limit public access to employee photos and personal data.
2. Implement employee awareness and training programs focused on the risks of AI-generated sextortion and safe online behavior.
3. Deploy monitoring tools to detect leaked or AI-generated synthetic content involving employees or brand assets on the dark web and public platforms (a minimal detection sketch follows this list).
4. Establish clear incident response protocols for sextortion cases, including legal support and psychological assistance.
5. Encourage use of privacy-enhancing technologies and services like Kaspersky’s Privacy Checker to systematically secure online accounts.
6. For organizations with minors or youth engagement, utilize parental control solutions and educate guardians on protecting children’s digital footprints.
7. Collaborate with AI service providers to advocate for stronger data protection, encryption, and access controls on AI-generated content databases.
8. Regularly audit third-party AI tools used within the organization to assess security and privacy risks.
9. Promote a culture of digital hygiene and skepticism towards unsolicited sextortion attempts leveraging AI deepfakes.
10. Engage with policymakers to support regulations addressing AI misuse and synthetic media protections.
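As a rough illustration of recommendation 3, the sketch below compares perceptual hashes of known employee or brand photos against images collected from public platforms and flags close matches for review. The folder layout, the Pillow and imagehash dependencies, and the distance threshold are assumptions made for the example, not tooling referenced by Kaspersky.

```python
# Minimal sketch: flag candidate images that are perceptually close to known
# employee or brand photos. Assumes two local folders ("reference/" and
# "candidates/") and the third-party Pillow and imagehash packages; the folder
# names and threshold are illustrative only.
from pathlib import Path

import imagehash
from PIL import Image

REFERENCE_DIR = Path("reference")    # known employee / brand photos
CANDIDATE_DIR = Path("candidates")   # images collected from public platforms
MAX_DISTANCE = 8                     # Hamming distance; lower = stricter match

def load_hashes(folder: Path) -> dict:
    """Compute a perceptual hash for every readable image in a folder."""
    hashes = {}
    for path in folder.iterdir():
        try:
            hashes[path.name] = imagehash.phash(Image.open(path))
        except (OSError, ValueError):
            continue  # skip files that are not readable images
    return hashes

def find_matches() -> list:
    """Pair up candidate images that closely resemble a reference photo."""
    refs = load_hashes(REFERENCE_DIR)
    candidates = load_hashes(CANDIDATE_DIR)
    matches = []
    for cand_name, cand_hash in candidates.items():
        for ref_name, ref_hash in refs.items():
            distance = cand_hash - ref_hash  # Hamming distance between hashes
            if distance <= MAX_DISTANCE:
                matches.append((cand_name, ref_name, distance))
    return matches

if __name__ == "__main__":
    for cand, ref, dist in find_matches():
        print(f"Possible reuse: {cand} resembles {ref} (distance {dist})")
```

Perceptual hashing only catches reuse or light edits of known source photos; heavily transformed or fully synthetic deepfakes require dedicated detection services, so treat this as a first-pass filter that routes hits to human review.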


Technical Details

Article Source
{"url":"https://www.kaspersky.com/blog/ai-generated-sextortion-social-media/55137/","fetched":true,"fetchedAt":"2026-01-15T15:19:14.107Z","wordCount":1670}

Threat ID: 696905724c611209ad2b6595

Added to database: 1/15/2026, 3:19:14 PM

Last enriched: 1/15/2026, 3:19:44 PM

Last updated: 1/15/2026, 7:18:52 PM

Views: 7

