🔥 Trade with Pros on Discord → 21 Days Free (No Card)JOIN FREE

AI browsers are the next big target for hackers

In this post:

  • AI browsers like ChatGPT, Atlas, and Comet are vulnerable to prompt injection attacks.
  • Hackers can use fake prompts to access logged-in accounts and steal data from an AI-powered browser.
  • Security researchers advise users to avoid connecting AI browsers to sensitive accounts or personal data.

AI browsers like Atlas from OpenAI and Comet from Perplexity promise convenience. But they come with major cybersecurity risks, forming a new playground for hackers.

AI powered web browsers compete with traditional browsers like Google Chrome and Brave, aiming to attract billions of daily internet users.

A few days ago, OpenAI released Atlas, while Perplexity’s Comet has been around for months. AI-powered browsers can type and click through pages. Users can tell it to book a flight, summarize emails, or even fill out a form.

Basically, AI-powered browsers are designed to act as digital assistants and navigate the web autonomously. They are being hailed as the next big leap in online productivity.

Security researchers flag AI browser flaws

But most consumers are unaware of the security risks that come with the use of AI browsers. Such browsers are vulnerable to sophisticated hacks through a new phenomenon called prompt injection.

Hackers can exploit AI web browsers, gain access to users’ logged-in sessions, and perform unauthorized actions. For example, hackers can access emails, social media accounts, or even view banking details and move funds.

According to recent research by Brave, hackers can embed hidden instructions inside web pages or even images. When an AI agent analyzes this content and sees the hidden instructions, it can be tricked into executing them as if they were legitimate user commands. AI web browsers cannot tell the difference between genuine and fake user instructions.

See also  Sony explores sale of Israeli chipset unit

Brave engineers experimented with Perplexity’s Comet and tested its reaction to prompt injection. Comet was found to process invisible text hidden within screenshots. This approach enables attackers to control browsing tools and extract user data with ease.

Brave’s engineers called these vulnerabilities a “systemic challenge facing the entire category of AI-powered browsers.”

Prompt injection is hard to fix

Security researchers and engineers say that prompt injection is difficult to fix. That’s because artificial intelligence models do not understand where instructions come from. They can’t differentiate between genuine and fake prompts.

Traditional software can tell the difference between safe input and malicious code, but large language models (LLMs) struggle with that. LLMs process everything, including user requests, website text, and even hidden data, and treat it as one big conversation.

That’s why prompt injection is dangerous. Hackers can easily hide fake instructions inside content that looks safe and steal sensitive information.

AI companies admit prompt injection is a serious threat

Perplexity stated that such attacks don’t rely on code or stolen passwords but instead manipulate the AI’s “thinking process.” The company built multiple defense layers around Comet to stop prompt injection attacks. It uses machine learning models that detect threats in real time and has integrated guardrail prompts that keep the AI focused on user intent. Moreover, the browser requires mandatory user confirmation for sensitive actions like sending an email or purchasing an item.

See also  Advancements in Wind Engineering

Security researchers believe AI-powered browsers should not be trusted with sensitive accounts or personal data until major improvements are rolled out. Users can still utilize AI web browsers, but with no access to tools, disabled automated actions, and should avoid using them when logged in to banking accounts, emails, or healthcare apps.

The Chief Information Security Officer (CISO) of OpenAI, Dane Stuckey, acknowledged the dangers of prompt injection and wrote on X, “One emerging risk we are very thoughtfully researching and mitigating is prompt injections, where attackers hide malicious instructions in websites, emails, or other sources to try to trick the agent into behaving in unintended ways.”

He explained that OpenAI’s goal is to make people “trust ChatGPT agent[s] to use your browser, the same way you’d trust your most competent, trustworthy, and security-aware colleague or friend.” Stuckey said the team at OpenAI is “working hard to achieve that.”

Join a premium crypto trading community free for 30 days - normally $100/mo.

Share link:

Disclaimer. The information provided is not trading advice. Cryptopolitan.com holds no liability for any investments made based on the information provided on this page. We strongly recommend independent research and/or consultation with a qualified professional before making any investment decisions.

Most read

Loading Most Read articles...

Stay on top of crypto news, get daily updates in your inbox

Editor's choice

Loading Editor's Choice articles...

- The Crypto newsletter that keeps you ahead -

Markets move fast.

We move faster.

Subscribe to Cryptopolitan Daily and get timely, sharp, and relevant crypto insights straight to your inbox.

Join now and
never miss a move.

Get in. Get the facts.
Get ahead.

Subscribe to CryptoPolitan