Skip to main content
Press slash or control plus K to focus the search. Use the arrow keys to navigate results and press enter to open a threat.
Reconnecting to live updates…

Second order prompt injection attacks on ServiceNow Now Assist

0
Medium
Published: Thu Dec 04 2025 (12/04/2025, 17:52:13 UTC)
Source: Reddit NetSec

Description

Second order prompt injection attacks targeting ServiceNow's Now Assist involve malicious inputs that are initially benign but later trigger harmful behavior when processed by the AI assistant. These attacks exploit the AI's prompt handling by injecting payloads that activate in subsequent interactions, potentially leading to unauthorized command execution or data leakage. Although no known exploits are currently in the wild, the medium severity rating reflects the risk posed by this novel attack vector. European organizations using ServiceNow Now Assist should be aware of this threat due to its potential impact on confidentiality and integrity. Mitigation requires careful input validation, monitoring AI outputs for anomalous behavior, and restricting AI assistant permissions. Countries with high ServiceNow adoption and critical infrastructure reliance on ITSM platforms, such as Germany, the UK, France, and the Netherlands, are more likely to be affected. Given the attack complexity and lack of authentication bypass, the suggested severity is medium. Defenders should prioritize understanding AI prompt injection risks and implement layered controls to reduce exposure.

AI-Powered Analysis

AILast updated: 12/04/2025, 17:56:50 UTC

Technical Analysis

The reported threat involves second order prompt injection attacks on ServiceNow's Now Assist, an AI-driven IT service management assistant. Unlike direct prompt injections where malicious input immediately influences AI behavior, second order injections embed malicious payloads in seemingly innocuous inputs that are stored and later incorporated into AI prompts during subsequent interactions. This delayed activation can bypass initial input sanitization and exploit the AI's contextual understanding to execute unauthorized commands, manipulate responses, or exfiltrate sensitive information. The attack leverages the AI's reliance on dynamic prompt construction, where stored user data or system-generated content is reused in prompts without adequate filtering. While no specific vulnerable versions or patches are identified, the threat highlights a novel attack vector against AI assistants integrated into enterprise platforms. The medium severity rating reflects moderate impact potential, considering the attack requires crafted inputs and may not lead to immediate system compromise but can undermine data confidentiality and integrity over time. The lack of known exploits suggests this is an emerging issue requiring proactive attention. The source of information is a recent Reddit NetSec discussion linking to an AppOmni research article, indicating early-stage community awareness and analysis.

Potential Impact

For European organizations, the impact of second order prompt injection attacks on ServiceNow Now Assist can be significant, especially for entities relying heavily on automated IT service management workflows. Confidentiality risks arise if the AI assistant inadvertently reveals sensitive information embedded in prompts or manipulated responses. Integrity can be compromised if attackers influence AI-driven decisions or commands, potentially disrupting service operations or causing erroneous ticket handling. Availability impact is less direct but could occur if the AI assistant's behavior leads to operational inefficiencies or triggers automated workflows that degrade service performance. Given ServiceNow's widespread adoption in sectors such as finance, healthcare, and government across Europe, the threat could affect critical infrastructure and sensitive data processing. The attack complexity and requirement for crafted inputs limit broad exploitation but do not eliminate targeted attacks against high-value organizations. The absence of known exploits currently reduces immediate risk but underscores the need for vigilance as AI assistant usage grows.

Mitigation Recommendations

To mitigate second order prompt injection risks in ServiceNow Now Assist, organizations should implement multi-layered defenses beyond generic advice. First, enforce strict input validation and sanitization on all user inputs stored for later AI prompt construction, including escaping or removing potentially malicious tokens. Second, audit and monitor AI assistant outputs for anomalous or unexpected responses that may indicate prompt manipulation. Third, limit the scope of AI assistant permissions to the minimum necessary, preventing execution of critical commands or access to sensitive data without additional verification. Fourth, implement logging and alerting on unusual AI interactions or prompt content changes to detect early signs of exploitation. Fifth, collaborate with ServiceNow to stay informed about patches or configuration recommendations addressing prompt injection vulnerabilities. Finally, conduct regular security training for administrators and users on the risks of AI prompt manipulation and safe data handling practices within ITSM platforms.

Need more detailed analysis?Get Pro

Technical Details

Source Type
reddit
Subreddit
netsec
Reddit Score
1
Discussion Level
minimal
Content Source
reddit_link_post
Domain
appomni.com
Newsworthiness Assessment
{"score":27.1,"reasons":["external_link","established_author","very_recent"],"isNewsworthy":true,"foundNewsworthy":[],"foundNonNewsworthy":[]}
Has External Source
true
Trusted Domain
false

Threat ID: 6931cb4f911f2f20c4b34d8e

Added to database: 12/4/2025, 5:56:31 PM

Last enriched: 12/4/2025, 5:56:50 PM

Last updated: 12/5/2025, 1:46:17 AM

Views: 6

Community Reviews

0 reviews

Crowdsource mitigation strategies, share intel context, and vote on the most helpful responses. Sign in to add your voice and help keep defenders ahead.

Sort by
Loading community insights…

Want to contribute mitigation steps or threat intel context? Sign in or create an account to join the community discussion.

Actions

PRO

Updates to AI analysis require Pro Console access. Upgrade inside Console → Billing.

Please log in to the Console to use AI analysis features.

Need enhanced features?

Contact root@offseq.com for Pro access with improved analysis and higher rate limits.

Latest Threats