Skip to main content
Press slash or control plus K to focus the search. Use the arrow keys to navigate results and press enter to open a threat.
Reconnecting to live updates…

Attackers Could Exploit AI Vision Models Using Imperceptible Image Changes

0
Medium
Exploitrce
Published: Thu May 07 2026 (05/07/2026, 13:45:53 UTC)
Source: SecurityWeek

Description

Cisco’s AI security researchers have analyzed ways to target vision-language models (VLMs) using pixel-level perturbation. The post Attackers Could Exploit AI Vision Models Using Imperceptible Image Changes appeared first on SecurityWeek .

AI-Powered Analysis

Machine-generated threat intelligence

AILast updated: 05/07/2026, 13:51:32 UTC

Technical Analysis

Cisco’s AI Threat Intelligence and Security Research team analyzed attacks on vision-language models (VLMs) using pixel-level perturbations to embed malicious instructions in images. These perturbations are imperceptible to humans and OCR tools but can make hidden instructions legible to AI models or reduce their safety refusals. The research applied optimizations against open embedding models and transferred results to proprietary systems such as GPT-4o and Claude. Claude showed a 28% increase in attack success on heavily blurred images after perturbation, while GPT-4o maintained stronger safety alignment. This demonstrates that attackers can craft images that cause AI agents to execute hidden commands, bypassing current safety and filtering mechanisms.

Potential Impact

The impact involves potential manipulation of AI vision-language models to execute hidden malicious instructions embedded in images, which humans cannot detect. This could lead to AI agents performing unauthorized actions such as data exfiltration or ignoring prior safety instructions. However, the research indicates that some models have safety filters that limit attack success, and no known exploits in the wild have been reported. The threat is primarily to AI systems that interpret images and act autonomously based on their content.

Mitigation Recommendations

Patch status is not yet confirmed — check the vendor advisory for current remediation guidance. Cisco’s research highlights the need for more robust defenses in the AI model representation space to detect and mitigate such pixel-level perturbation attacks. Organizations using VLMs should monitor vendor advisories for updates and improvements in safety filters and model robustness. No specific patches or fixes are currently documented. Until official mitigations are available, caution is advised when deploying AI systems that interpret untrusted images.

Pro Console: star threats, build custom feeds, automate alerts via Slack, email & webhooks.Upgrade to Pro

Technical Details

Article Source
{"url":"https://www.securityweek.com/attackers-could-exploit-ai-vision-models-using-imperceptible-image-changes/","fetched":true,"fetchedAt":"2026-05-07T13:51:22.837Z","wordCount":1181}

Threat ID: 69fc98dacbff5d8610f706e6

Added to database: 5/7/2026, 1:51:22 PM

Last enriched: 5/7/2026, 1:51:32 PM

Last updated: 5/8/2026, 9:27:33 PM

Views: 20

Community Reviews

0 reviews

Crowdsource mitigation strategies, share intel context, and vote on the most helpful responses. Sign in to add your voice and help keep defenders ahead.

Sort by
Loading community insights…

Want to contribute mitigation steps or threat intel context? Sign in or create an account to join the community discussion.

Actions

PRO

Updates to AI analysis require Pro Console access. Upgrade inside Console → Billing.

Please log in to the Console to use AI analysis features.

Need more coverage?

Upgrade to Pro Console for AI refresh and higher limits.

For incident response and remediation, OffSeq services can help resolve threats faster.

Latest Threats

Breach by OffSeqOFFSEQFRIENDS — 25% OFF

Check if your credentials are on the dark web

Instant breach scanning across billions of leaked records. Free tier available.

Scan now
OffSeq TrainingCredly Certified

Lead Pen Test Professional

Technical5-day eLearningPECB Accredited
View courses