🔥 Trade with Pros on Discord → 21 Days Free (No Card)JOIN FREE

New exploit found in ServiceNow’s AI agents can be tricked to act against each other

In this post:

  • Researchers uncover a second-order prompt injection exploit in ServiceNow’s Now Assist AI agents caused by risky default configurations.
  • Attackers can manipulate agent-to-agent collaboration to steal data, modify records, or escalate privileges without detection.
  • Security experts warn that AI agents introduce new attack vectors and urge organizations to review configurations and tighten controls.

A new exploit in ServiceNow’s Now Assist platform can allow malicious actors to manipulate its AI agents into performing unauthorized actions, as detailed by SaaS security firm AppOmni.

Default configurations in the software, which enable agents to discover and collaborate with one another, can be weaponized to launch prompt injection attacks far beyond a single malicious input, says chief of SaaS Security at AppOmni, Aaron Costello.

The flaw allows an adversary to seed a hidden instruction inside data fields that an agent later reads, which may quietly enlist the help of other agents on the same ServiceNow team, setting off a chain reaction that can lead to data theft or privilege escalation. 

Costello explained the scenario as “second-order prompt injection,” where the attack emerges when the AI processes information from another part of the system.

“This discovery is alarming because it isn’t a bug in the AI; it’s expected behavior as defined by certain default configuration options,” he noted on AppOmni’s blog published Wednesday.

ServiceNow Assist AI agents exposed to coordinated attack

Per Costello’s investigations cited in the blog, many organizations deploying Now Assist may be unaware that their agents are grouped into teams and set to discover each other automatically to perform a seemingly “harmless task” that can expand into a coordinated attack. 

“When agents can discover and recruit each other, a harmless request can quietly turn into an attack, with criminals stealing sensitive data or gaining more access to internal company systems,” he said.

See also  Cardano price analysis: ADA shows dormancy at $0.4483 after a brief bullish spell

One of Now Assist’s selling points is its ability to coordinate agents without a developer’s input to merge them into a single workflow. This architecture sees several agents with different specialties collaborate if one cannot complete a task on its own. 

For agents to work together behind the scenes, the platform requires three elements. First, the underlying large language model must support agent discovery, a capability already integrated into both the default Now LLM and the Azure OpenAI LLM

Second, the agents must belong to the same team, something that occurs automatically when they are deployed to environments such as the default Virtual Agent experience or the Now Assist Developer panel. Lastly, the agents must be marked as “discoverable,” which also happens automatically when they are published to a channel.

Once these conditions are satisfied, the AiA ReAct Engine routes information and delegates tasks among agents, operating like a manager directing subordinates. Meanwhile, the Orchestrator performs discovery functions and identifies which agent is best suited to take on a task. 

It only searches among discoverable agents within the team, sometimes even more than administrators realize. This interconnected architecture becomes vulnerable when any agent is configured to read data not directly submitted by the user initiating the request. 

“When the agent later processes the data as part of a normal operation, it may unknowingly recruit other agents to perform functions such as copying sensitive data, altering records, or escalating access levels,” Costello surmised.

See also  U.S. banks have a big little dilemma going on for them

AI agent attack can escalate privileges to breach accounts

AppOmni found that Now Assist agents inherit permissions and act under the authority of the user who initiated the workflow. A low-level attacker can plant a harmful prompt that gets activated during the workflow of a more privileged employee, getting access without ever breaching their account.

“Because AI agents operate through chains of decisions and collaboration, the injected prompt can reach deeper into corporate systems than administrators expect,” AppOmni’s analysis read.

AppOmni said that attackers can redirect tasks that appear benign to an untrained agent but become harmful once other agents amplify the instruction through their specialized capabilities. 

The company warned that this dynamic creates opportunities for adversaries to exfiltrate data without raising suspicion. “If organizations aren’t closely examining their configurations, they’re likely already at risk,” Costello reiterated.

LLM developer Perplexity, said in an early November blog post that novel attack vectors have broadened the pool of potential exploits. 

“For the first time in decades, we’re seeing new and novel attack vectors that can come from anywhere,” the company wrote.

Software engineer Marti Jorda Roca of NeuralTrust said the public must understand that “there are specific dangers using AI in the security sense.”

Want your project in front of crypto’s top minds? Feature it in our next industry report, where data meets impact.

Share link:

Disclaimer. The information provided is not trading advice. Cryptopolitan.com holds no liability for any investments made based on the information provided on this page. We strongly recommend independent research and/or consultation with a qualified professional before making any investment decisions.

Most read

Loading Most Read articles...

Stay on top of crypto news, get daily updates in your inbox

Editor's choice

Loading Editor's Choice articles...

- The Crypto newsletter that keeps you ahead -

Markets move fast.

We move faster.

Subscribe to Cryptopolitan Daily and get timely, sharp, and relevant crypto insights straight to your inbox.

Join now and
never miss a move.

Get in. Get the facts.
Get ahead.

Subscribe to CryptoPolitan