
OpenAI suspends China-linked accounts using ChatGPT for surveillance tool

In this post:

  • After an investigation, OpenAI suspended several accounts that appeared to be using ChatGPT to develop surveillance tools.
  • According to a report published by the company, the suspended accounts were identified based on behavioral patterns and linked to China.
  • This move by OpenAI emphasizes its commitment to preventing authoritarian regimes from using AI technologies for oppressive purposes.

OpenAI, America’s top artificial intelligence startup, just banned multiple accounts on ChatGPT, accusing them of being involved in developing a surveillance tool. 

According to a company report published in February 2025, the suspended accounts allegedly had ties to China. These accounts were reportedly using OpenAI’s models to generate detailed descriptions of a social media listening tool, which the operators claimed was used to provide real-time reports about protests in the West to Chinese security services.

The banned accounts had also found a way to use OpenAI’s models to debug code seemingly intended to implement such a tool. 

Why China is suspected

OpenAI has policies put in place to prevent the use of AI for monitoring communications or unauthorized surveillance of individuals, including activities by or on behalf of governments that seek to suppress personal freedoms and rights.

OpenAI identified the accounts in question based on behavioral patterns, among other findings. The accounts were reportedly using OpenAI’s models to analyze documents, generate sales pitches, write descriptions of tools for monitoring social media activity, and research political actors and topics.

The accounts are suspected of having Chinese origins because their activity occurred primarily during mainland Chinese business hours, and because they prompted OpenAI’s models in Chinese.

OpenAI also found that those behind the accounts were using the tools in a manner consistent with manual prompting rather than automation. In some instances, the AI firm discovered that one account may have had multiple operators.


One of the major activities of the operation involved generating detailed descriptions, consistent with sales pitches, for what was described as the “Qianyue Overseas Public Opinion AI Assistant.” 

This tool was reportedly designed to analyze posts and comments from platforms such as X, Facebook, YouTube, Instagram, Telegram, and Reddit, and to feed what it learned to Chinese authorities.

OpenAI has not been able to independently verify these descriptions, but the incident has raised alarm in the US. According to OpenAI’s principal investigator, Ben Nimmo, this case is an example of how authoritarian regimes like China can try to leverage US-built technology against the US and its allies.

“This is a pretty troubling glimpse into the way one non-democratic actor tried to use democratic or US-based AI for non-democratic purposes, according to the materials they were generating themselves.”

OpenAI has also revealed that the accounts referenced other AI tools, including a version of Llama, Meta’s open-source AI model. 

This case comes not long after DeepSeek’s creation 

OpenAI has been sounding the alarm about the dangers of Chinese AI, especially after DeepSeek stunned the industry with its R1 reasoning model, which beat many US-made models on benchmarks and triggered a massive sell-off in US tech markets.


DeepSeek’s superiority is debatable, but its makers claim to have built the model at a fraction of what most US labs spend on theirs. The claim has drawn mixed reactions from the public, and many experts are convinced it is an exaggeration.

OpenAI is not only convinced that the team behind DeepSeek spent more than it claims, but it has also accused DeepSeek of distilling the outputs of OpenAI’s models to build its own.

According to a Bloomberg report, Microsoft, a major investor in OpenAI, is now investigating whether data belonging to the firm has been used without authorization.

David Sacks, the recently appointed White House “AI and crypto czar”, has echoed OpenAI’s concerns, suggesting that DeepSeek may have used the American AI firm’s models to get better, a process termed knowledge distillation.

“There’s substantial evidence that what DeepSeek did here is they distilled the knowledge out of OpenAI’s models,” Sacks said.

He added that over the next couple of months, leading American AI companies will start taking steps to prevent distillation, in hopes of slowing down copycat models.
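For context on the technique Sacks describes: knowledge distillation trains a smaller “student” model to imitate the output distribution of a larger “teacher” model, so the student inherits capability without the teacher’s training cost. Below is a minimal sketch of the standard approach using tiny stand-in models; everything here (model sizes, temperature, training loop) is hypothetical for illustration and does not reflect how DeepSeek or OpenAI actually build their systems.

```python
# Minimal knowledge-distillation sketch (hypothetical models, toy data).
# The student is trained to match the teacher's softened output distribution.
import torch
import torch.nn as nn
import torch.nn.functional as F

teacher = nn.Linear(128, 10)   # stand-in for a large pretrained model
student = nn.Linear(128, 10)   # smaller model being trained
optimizer = torch.optim.Adam(student.parameters(), lr=1e-3)
temperature = 2.0              # >1 softens both distributions

for _ in range(100):                        # toy training loop
    x = torch.randn(32, 128)                # hypothetical input batch
    with torch.no_grad():
        teacher_logits = teacher(x)         # teacher outputs, no gradients
    student_logits = student(x)
    # KL divergence between softened student and teacher distributions,
    # scaled by T^2 as in standard distillation formulations
    loss = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature**2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

This is also why distillation is hard to block in practice: the student only needs query access to the teacher’s outputs, which is exactly what a public API provides.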
