Signal president Meredith Whittaker has warned the general public about the risks associated with agentic artificial intelligence (AI). Speaking at the SXSW conference in Austin, Texas, she cautioned that using agentic AI could pose privacy risks to users.
In her remarks, Whittaker stressed the need for secure forms of communication, likening the use of AI agents to “putting your brain in a jar.” She added that the growing trend of users allowing AI to perform tasks on their behalf could spell trouble for both privacy and security.
The use of AI agents has become widespread in the crypto industry, with users in the decentralized finance (DeFi) sector relying on these agents to execute trades, adapt to market conditions, and even optimize strategies with little or no user input. AI agents are also integrated into platforms to help users trade more easily, intelligently, and quickly. In some cases, however, users must download bots before they can interact with an AI agent.
Signal president urges caution over AI agents
According to the Signal president, various platforms have begun marketing AI agents as a value-added service that will make users’ lives easier. In her speech, Whittaker noted that beyond the crypto sector, these agents handle all sorts of online tasks for users. For example, AI agents can purchase tickets to events, schedule events on a calendar, and message friends.
“So we can just put our brain in a jar because the thing is doing that and we don’t have to touch it, right?” the Signal president asked.
She then went into a lengthy discussion of the permissions users must grant AI agents before they can perform all of these tasks. Whittaker said such access would include our web browser, calendar, messaging apps, and sometimes our credit card, since the agent has to pay for the event tickets it reserves for us.
“It would need to be able to drive that [process] across our entire system with something that looks like root permission, accessing every single one of those databases — probably in the clear, because there’s no model to do that encrypted,” Whittaker added.
Whittaker’s statement calls for caution in the AI industry
In her statement, Whittaker pointed out that AI agents cannot function on their own; they must be powered by AI models. She also discussed the security implications of that dependence, noting that user information is sent to a server for processing before being returned.
“So there’s a profound issue with security and privacy that is haunting this hype around agents, and that is ultimately threatening to break the blood-brain barrier between the application layer and the OS layer by conjoining all of these separate services [and] muddying their data,” Whittaker added.
She also noted that if Signal were to integrate AI agents into its app, it would undermine the privacy of its users’ messages. An agent would need access to the app both to text other people and to pull out information in order to summarize those texts.
Her comments came after she appeared on a panel discussing how the AI industry is built on a surveillance model because of its mass data collection. Whittaker added that, in her view, the more data is collected, the greater the potential for consequences she did not consider good.
With agentic AI, the Signal president concluded, we would “further undermine privacy and security” in the name of a “magic genie bot that’s going to take care of the exigencies of life.”