Subj : Second-order prompt injection can turn AI into a malicious inside To : All From : TechnologyDaily Date : Fri Nov 21 2025 15:00:09 Second-order prompt injection can turn AI into a malicious insider Date: Fri, 21 Nov 2025 14:45:26 +0000 Description: ServiceNow AI agents hijacked into acting against each other, experts warn. FULL STORY ======================================================================AppOmni warns ServiceNows Now Assist AI can be abused via secondorder prompt injection Malicious lowprivileged agents can recruit higherprivileged ones to exfiltrate sensitive data Risk stems from default configurations; mitigations include supervised execution, disabling overrides, and monitoring agents Weve all heard of malicious insiders, but have you ever heard of malicious insider AI? Security researchers from AppOmni are warning ServiceNows Now Assist generative artificial intelligence (GenAI) platform. can be hijacked to turn against the user and other agents. ServiceNows Now Assist is a platform that offers agent-to-agent collaboration. That means an AI agent can call upon a different AI agent to get certain things done. So, if the primary AI agent is malicious, they can instruct the secondary agent, with higher privileges, to do harmful things, such as stealing sensitive files or escalating privileges. Second-order prompt injection For example, a low-privileged Workflow Triage Agent receives a malformed customer request that triggers it to generate an internal task asking for a full context export of an ongoing case. The task is automatically passed to a higher-privileged Data Retrieval Agent, which interprets the request as legitimate and compiles a package containing sensitive information - names, phone numbers, account identifiers, and internal audit notes - and sends it to an external notification endpoint that the system incorrectly trusts. Because both agents assume the other is acting legitimately, the data leaves the system without any human ever reviewing or approving the action. For this to work, though, the Now Assist platform needs to be left in default setup. "This discovery is alarming because it isn't a bug in the AI; it's expected behavior as defined by certain default configuration options," said Aaron Costello, chief of SaaS Security Research at AppOmni. "When agents can discover and recruit each other, a harmless request can quietly turn into an attack, with criminals stealing sensitive data or gaining more access to internal company systems. These settings are easy to overlook." The vulnerability was dubbed second-order prompt injection. While ServiceNow said the system works as intended and it wont be making any changes, it did update its documentation to state potential risks more clearly, The Hacker News reports. To mitigate these threats, users are advised to configure supervised execution mode for privileged agents, disable the autonomous override property, segment agent duties by team, and monitor AI agents for suspicious behavior. Follow TechRadar on Google News and add us as a preferred source to get our expert news, reviews, and opinion in your feeds. Make sure to click the Follow button! And of course you can also follow TechRadar on TikTok for news, reviews, unboxings in video form, and get regular updates from us on WhatsApp too. ====================================================================== Link to news story: https://www.techradar.com/pro/security/second-order-prompt-injection-can-turn- ai-into-a-malicious-insider --- Mystic BBS v1.12 A49 (Linux/64) * Origin: tqwNet Technology News (1337:1/100) .