
Anthropic blocks Chinese-controlled firms from using its AI

In this post:

  • Anthropic bans Chinese-controlled firms and their overseas branches from its AI tools.
  • The company says the move protects U.S. security and prevents military misuse of AI.
  • The decision shows AI companies are taking national security steps on their own.

Anthropic, the San Francisco AI company that built the Claude chatbot, blocked Chinese-owned firms and their overseas branches from using its AI services. The company said the step protects U.S. national security and prevents misuse by authoritarian governments.

The new rules build on earlier bans that had already blocked access from Russia, Iran, and North Korea. Anthropic said Chinese-owned companies, even those operating abroad, could still find loopholes to obtain advanced AI and turn it into tools for military or intelligence use.

Anthropic expands AI ban to Chinese-controlled firms

Having already blocked access from countries like Russia, Iran, and North Korea, Anthropic now also bars companies and organizations that are more than 50% owned by entities in those restricted countries, including China, from using its AI tools. The rules apply even if those companies are registered and operating outside their home countries.

In the past, firms in authoritarian states have created subsidiaries in other jurisdictions, posing as foreign-based businesses while remaining controlled by parent companies back home. Anthropic said Chinese companies and other restricted entities use this loophole to access, analyze, replicate, and adopt sensitive AI models, creating direct risks to national security.

In its announcement, the company stressed that a Chinese-owned subsidiary operating in Europe, Southeast Asia, or North America cannot be treated as independent from its parent company. This is because it’s still bound by Chinese law, so the authoritarian government can pressure it to share sensitive information or give access to foreign technology. 


Anthropic sees this as a major risk because these foreign governments could use American technology to further develop projects like advanced surveillance networks and censorship systems. Worse, they could feed the technology into autonomous military drones and AI-guided weapons.

Regulators have raised the alarm about such risks multiple times, and some agencies have responded by banning Chinese-developed AI platforms like DeepSeek. That ban rattled the global tech sector because DeepSeek was well known for its powerful capabilities.

For years, Anthropic’s chief executive, Dario Amodei, has urged the U.S. to set tougher restrictions on transferring AI technologies to China. He argues that American companies must limit who can access their products to protect national security instead of waiting for the government to force them to comply. 

Amodei and other policymakers point to Chinese firms like DeepSeek, Alibaba, Tencent, and ByteDance. They say these firms have invested heavily in building advanced AI systems and made rapid progress against their Silicon Valley rivals. They warn that if these companies gained access to Anthropic's models, they could close the gap and channel that knowledge into military applications, giving their governments a bigger advantage worldwide.

Chinese tech companies face tighter U.S. restrictions

Anthropic’s decision reflects a shift in Silicon Valley’s view of its role in global security. For years, most technology companies avoided matters of foreign policy, but Anthropic has chosen to act proactively on national defense, even if it means losing revenue. Instead of waiting for the government to develop new laws, the company is enforcing its own rules while urging Washington to tighten export controls before it’s too late.


Analysts say Anthropic’s move will protect its reputation as a safety-focused company and show the world that the most advanced AI firms are starting to see themselves as part of national defense infrastructure.

The company risks losing hundreds of millions of dollars in revenue over the new rules. Still, its leaders stand firm in their decision and insist that the risks of misuse outweigh any financial setback. The move also supports Washington policymakers’ effort to maintain America’s technological edge at a time when the U.S.–China rivalry is shaping the future of key industries like chips and quantum computing.

Anthropic may have calculated that the long-term benefits of protecting its technology and siding with national interests outweigh any short-term losses. The company’s policy carries worldwide influence: with a valuation of $183 billion and Amazon among its biggest investors, it serves more than 300,000 business customers worldwide, and the number of enterprise accounts bringing in more than $100,000 annually is growing rapidly.

But despite its massive growth rate, the company’s leaders say that safety and responsibility must remain at the center of its work. 


