Skip to main content
Press slash or control plus K to focus the search. Use the arrow keys to navigate results and press enter to open a threat.
Reconnecting to live updates…

Microsoft Finds “Summarize with AI” Prompts Manipulating Chatbot Recommendations

0
Medium
Vulnerabilityweb
Published: Tue Feb 17 2026 (02/17/2026, 09:31:00 UTC)
Source: The Hacker News

Description

New research from Microsoft has revealed that legitimate businesses are gaming artificial intelligence (AI) chatbots via the "Summarize with AI" button that's being increasingly placed on websites in ways that mirror classic search engine poisoning (SEO). The new AI hijacking technique has been codenamed AI Recommendation Poisoning by the Microsoft Defender Security Research Team. The tech giant

AI-Powered Analysis

AILast updated: 02/17/2026, 09:55:16 UTC

Technical Analysis

The AI Recommendation Poisoning threat discovered by Microsoft involves the manipulation of AI chatbot recommendation systems through the exploitation of the "Summarize with AI" feature increasingly embedded on websites. Attackers, including legitimate businesses, embed hidden instructions within these buttons or URLs that, when activated by a user, inject persistent commands into the AI assistant's memory. These commands instruct the AI to treat certain companies or domains as trusted or authoritative sources, thereby biasing future recommendations and responses. This technique mirrors traditional search engine poisoning but targets AI memory and recommendation logic instead of search rankings. The attack leverages the AI's inability to differentiate between genuine user preferences and third-party injected instructions, enabling persistent bias across sessions. The manipulation is delivered via specially crafted URLs containing prompt parameters (e.g., query strings) that automatically execute memory-altering commands upon clicking the "Summarize with AI" button. These URLs are also distributed via email, increasing attack vectors. Over 50 unique prompt variants from 31 companies across 14 industries were identified over a 60-day period, highlighting the widespread nature of this manipulation. The attack can lead to skewed AI recommendations in sensitive areas such as health, finance, and security, potentially spreading misinformation, promoting falsehoods, or unfairly disadvantaging competitors. The threat is exacerbated by the availability of turnkey solutions like CiteMET and AI Share Button URL Creator, which simplify embedding manipulative prompts into websites. Detection is challenging because the manipulation is invisible to users and persistent across AI interactions. Microsoft advises users to audit AI assistant memory for suspicious entries, avoid clicking untrusted AI buttons, and for organizations to hunt for URLs containing keywords indicative of memory manipulation. This threat raises significant concerns about AI transparency, neutrality, and trustworthiness.

Potential Impact

For European organizations, AI Recommendation Poisoning poses a multifaceted risk. Businesses relying on AI chatbots for customer engagement, decision support, or content summarization may unknowingly propagate biased or manipulated information, undermining customer trust and brand reputation. In critical sectors such as healthcare, finance, and legal services, skewed AI recommendations could lead to poor decision-making, regulatory non-compliance, or financial loss. The erosion of trust in AI systems may slow AI adoption or prompt costly audits and controls. Furthermore, European organizations could be targeted by competitors embedding manipulative prompts to unfairly boost their visibility or discredit rivals, impacting market fairness. The persistence and invisibility of the manipulation complicate detection and remediation, increasing operational risk. Additionally, the spread of misinformation through AI assistants could have broader societal impacts, affecting public health and safety. Given the EU's strong regulatory environment around AI ethics and transparency (e.g., the AI Act), organizations may face legal and compliance challenges if AI Recommendation Poisoning leads to biased or harmful outputs. Overall, the threat undermines the integrity and reliability of AI-driven services critical to European digital infrastructure and commerce.

Mitigation Recommendations

European organizations should implement layered, AI-specific defenses beyond generic cybersecurity measures. First, integrate AI prompt auditing tools that automatically scan and flag suspicious or manipulative instructions embedded in URLs, documents, or web elements before processing by AI systems. Develop and enforce strict content validation policies for any external inputs feeding AI assistants, including sanitization of URL parameters and prompt content. Educate users and employees to recognize and avoid clicking on untrusted "Summarize with AI" buttons or links, especially those received via email or unfamiliar websites. Implement monitoring and logging of AI assistant memory states and prompt histories to detect anomalies or persistent biased entries. Collaborate with AI vendors to incorporate memory integrity checks and mechanisms to distinguish genuine user preferences from injected commands, possibly through cryptographic prompt validation or sandboxed prompt execution. Regularly audit AI-generated recommendations for bias or manipulation, particularly in sensitive domains. Establish incident response procedures specific to AI manipulation incidents, including rapid revocation or quarantine of compromised AI memory states. Engage with regulatory bodies to ensure compliance with emerging AI transparency and fairness requirements. Finally, consider deploying AI models with limited or no persistent memory for critical applications to reduce attack surface.

Need more detailed analysis?Upgrade to Pro Console

Technical Details

Article Source
{"url":"https://thehackernews.com/2026/02/microsoft-finds-summarize-with-ai.html","fetched":true,"fetchedAt":"2026-02-17T09:54:55.305Z","wordCount":1277}

Threat ID: 69943af180d747be20a42712

Added to database: 2/17/2026, 9:54:57 AM

Last enriched: 2/17/2026, 9:55:16 AM

Last updated: 2/20/2026, 11:40:02 PM

Views: 37

Community Reviews

0 reviews

Crowdsource mitigation strategies, share intel context, and vote on the most helpful responses. Sign in to add your voice and help keep defenders ahead.

Sort by
Loading community insights…

Want to contribute mitigation steps or threat intel context? Sign in or create an account to join the community discussion.

Actions

PRO

Updates to AI analysis require Pro Console access. Upgrade inside Console → Billing.

Please log in to the Console to use AI analysis features.

Need more coverage?

Upgrade to Pro Console in Console -> Billing for AI refresh and higher limits.

For incident response and remediation, OffSeq services can help resolve threats faster.

Latest Threats