
China’s researchers defy Meta’s Terms of Use to develop AI tool for military purposes

In this post:

  • PLA-linked Chinese researchers used Meta’s Llama model to develop an AI tool for military purposes.
  • Meta does not allow its open-source models to be used in military, warfare, and nuclear applications.
  • China’s researchers have gone against the company’s terms of use; however, Meta cannot enforce the provisions because the models are open-source.

PLA-linked Chinese research institutions have used the Llama model to develop an AI tool for use in military applications. Three academic papers have been published regarding this development, and they have been reviewed by Reuters in a new report. 

Six researchers from three Chinese institutions have detailed the development of an AI tool that gathers and processes intelligence and assists with operational decision-making.

China tweaked Meta’s Llama 13B

They call the tool “ChatBIT,” and it was built by fine-tuning an earlier Llama 13B large language model (LLM).

The researchers said that ChatBIT was “optimized for dialogue and question-answering tasks in the military field” and that it outperformed some other AI models that were almost as capable as GPT-4.

According to Reuters, in one of the research papers the scientists specifically claimed that ChatBIT performs at about 90% of GPT-4’s capability, but they did not explain how they tested its performance or say whether the model has been deployed in the field.

“It’s the first time there has been substantial evidence that PLA military experts in China have been systematically researching and trying to leverage the power of open-source LLMs, especially those of Meta, for military purposes.”

Jamestown Foundation Associate Fellow Sunny Cheung.

Jamestown Foundation is a Washington, D.C.-based think tank that studies China’s emerging and dual-use technologies, including AI.


Meta reiterated that this was insignificant

While Meta has provisions in place that prevent the use of its AI models for military, warfare, and nuclear applications, it cannot practically enforce these provisions because these models are public.

In a statement, Meta also said the development would be insignificant because the Chinese researchers used an old version of the Llama model, specifically the Llama 13B LLM. The social media giant is already training Llama 4.

According to Tom’s Guide, other researchers also noted that the Chinese military model ChatBIT was fine-tuned on only 100,000 military dialogue records, a drop in the bucket compared with the trillions of data points used to train the latest models.

However, the use of open-source AI models could potentially enable China to catch up with the latest models released by American tech firms.

While some experts have questioned the viability of a small data set for military AI training, the development of ChatBIT could be a proof of concept, with the military institutes planning to create something more expansive.

The development comes at a time when the US is already concerned that China may gain a military advantage on the back of the availability of open-source models.

As such, the US has placed export controls barring China from accessing its high-end AI models and GPUs. Some legislators are also pushing for a total ban, including access to open-source models.

