
23,000 active hosts, 130 countries: Hackers hijack open-source AI models 

In this post:

  • Security researchers have found over 175,000 unique AI hosts across 130 countries, including 23,000 “persistent” servers that form a reliable pool of targets for criminal activity.
  • Hackers are hijacking these unprotected AI models to steal computing power, bypass security filters, and resell access to other criminals.
  • About 48% of these exposed systems allow hackers to potentially execute code or reach private internal databases through simple text prompts.

About 175,000 private AI servers are reportedly exposed to the public internet, handing hackers ready-made infrastructure for illicit activity.

The findings come from security researchers at SentinelOne and Censys, who tracked 7.23 million observations over nearly 300 days.

Hackers exploit Ollama setting

A recent report from SentinelOne and Censys found that over 175,000 private AI servers are accidentally exposed to the internet. These systems use Ollama, an open-source software that lets people run powerful AI models, like Meta’s Llama or Google’s Gemma, on their own computers instead of using a website like ChatGPT. 

By default, Ollama listens only on the machine it is installed on. However, users often change that setting to allow remote access, which can accidentally expose the entire system to the public internet.
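The risky setting is the server's bind address, typically controlled through the `OLLAMA_HOST` environment variable: the default keeps the API on loopback, while `0.0.0.0` answers requests on every network interface. A minimal sketch of that distinction — the `is_publicly_bound` helper is illustrative, not part of Ollama:

```python
def is_publicly_bound(host_setting: str) -> bool:
    """Return True when an OLLAMA_HOST-style value binds beyond loopback.

    (IPv6 loopback handling omitted for brevity.)
    """
    # Strip an optional scheme and port: "http://0.0.0.0:11434" -> "0.0.0.0"
    host = host_setting.split("://")[-1].rsplit(":", 1)[0]
    return host not in ("127.0.0.1", "localhost")

# The loopback default keeps the API reachable from the local machine only.
print(is_publicly_bound("127.0.0.1:11434"))  # False
# Rebinding to 0.0.0.0 exposes the API on every interface the host has.
print(is_publicly_bound("0.0.0.0:11434"))    # True
```

Anyone who made that one-line change for convenience, without also adding a firewall rule or reverse proxy, ends up in the exposed population the researchers counted.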

They tracked 7.23 million observations over nearly 300 days and discovered that while many of these AI “hosts” are temporary, about 23,000 of them stay online almost all the time. These “always-on” systems are perfect targets for hackers because they provide free, powerful hardware that is not monitored by any big tech company.

In the United States, about 18% of these exposed systems are in Virginia, likely due to the high density of data centers there. In China, roughly 30% of hosts are located in Beijing.


Surprisingly, 56% of all these exposed AI systems are running on home or residential internet connections. This is a major problem because hackers can use these home IP addresses to hide their identity. 

When a hacker sends a malicious message through someone’s home AI, it looks like it is coming from a regular person rather than a criminal botnet.

How are criminals using these hijacked AI systems?

According to Pillar Security, a new criminal network known as Operation Bizarre Bazaar is actively hunting for these exposed AI endpoints. They look for systems running on the default port 11434 that don’t require a password. Once they find one, they steal the “compute” and sell it to others who want to run AI tasks for cheap, like generating thousands of phishing emails or creating deepfake content.
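Ollama's HTTP API ships with no authentication, so an exposed host will answer anyone who asks. A hedged sketch of the kind of probe such scanners run: `/api/tags` is Ollama's real model-listing endpoint, but the helper functions here are my own illustration, not attacker tooling from the report.

```python
import json
from urllib.request import urlopen

def parse_tags(body: bytes) -> list:
    """Extract model names from an Ollama /api/tags response body."""
    return [m["name"] for m in json.loads(body).get("models", [])]

def list_models(host: str, port: int = 11434, timeout: float = 3.0) -> list:
    """Ask an Ollama host which models it serves; no credentials required."""
    with urlopen(f"http://{host}:{port}/api/tags", timeout=timeout) as resp:
        return parse_tags(resp.read())

# Any host that answers this request with a model list is wide open:
sample = b'{"models": [{"name": "llama3:8b"}, {"name": "gemma:2b"}]}'
print(parse_tags(sample))  # ['llama3:8b', 'gemma:2b']
```

One such request is enough to confirm a hijackable server and inventory the models it can run, which is exactly the information a compute reseller needs.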

Between October 2025 and January 2026, the security firm GreyNoise recorded 91,403 attack sessions targeting these AI setups. They found two main types of attacks.

  • The first uses a technique called Server-Side Request Forgery (SSRF) to force the AI to connect to the hacker’s own servers. 
  • The second is a massive “scanning” campaign where hackers send thousands of simple questions to find out exactly which AI model is running and what it is capable of doing.

About 48% of these systems are configured for “tool-calling.” This means the AI is allowed to interact with other software, search the web, or read files on the computer. 

If a hacker finds a system like this, they can use “prompt injection” to trick the AI. Instead of asking for a poem, they might tell the AI to “list all the API keys in the codebase” or “summarize the secret project files.” Since there is no human watching, the AI often obeys these commands.
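The failure mode is easiest to see in a naive agent loop: whatever tool call the model emits gets executed verbatim, with no allowlist and no human review. A deliberately simplified sketch — the tool names and dispatch shape are illustrative, not any particular framework's API:

```python
# A toy tool table. An injected prompt only needs to steer the model
# into emitting a read_file call aimed at something sensitive.
TOOLS = {
    "web_search": lambda query: f"(search results for {query!r})",
    "read_file": lambda path: f"(contents of {path})",  # a real agent would open the file
}

def dispatch(tool_call: dict) -> str:
    """Run whatever tool the model asked for -- no allowlist, no review."""
    return TOOLS[tool_call["name"]](tool_call["arguments"])

# A benign request and an injected one look identical to the dispatcher:
print(dispatch({"name": "web_search", "arguments": "weather"}))
print(dispatch({"name": "read_file", "arguments": ".env"}))
```

The mitigations are the inverse of this sketch: an explicit tool allowlist, argument validation, and keeping the server off the public internet in the first place.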

The Check Point 2026 Cyber Security Report shows that total cyber attacks increased by 70% between 2023 and 2025. In November 2025, Anthropic reported the first documented case of an AI-orchestrated cyber espionage campaign where a state-sponsored group used AI agents to perform 80% of a hack without human help.

Several new vulnerabilities, like CVE-2025-1975 and CVE-2025-66959, were discovered just this month. They are flaws that allow hackers to crash an Ollama server by sending it a specially crafted model file. 

Because 72% of these hosts serve models in the same quantization format, Q4_K_M, a single successful attack could take down thousands of systems at once.


