AI Models Can Be Risky if Left to Rogue Elements

In this post:

  • The US government is putting up guardrails to restrict AI model exports to Russia and China.
  • Deepfakes and biological weapons are seen as urgent risks.
  • A proposed bill would help the Biden administration control AI exports and slow China’s progress.

The Commerce Department under the Biden administration is planning a new compliance initiative to further restrict exports of open- and closed-source artificial intelligence models. The initiative is an effort to safeguard US interests and AI technology from Russia and China, and it will complement the steps taken over the past two years to block Chinese access to the most advanced computer chips.

The US effort is to put stricter protective barriers around the core software of large language models that power apps like ChatGPT, Reuters reported, citing three sources familiar with the matter. The news agency said researchers in the private sector and government are worried that US adversaries could use the technology to mount aggressive cyberattacks and develop biological weapons, while the Chinese Embassy has opposed the move, describing it as unilateral bullying and economic coercion.

Deepfakes are a lethal disinformation weapon

The threats the US fears are many, and several could come from non-state actors backed by hostile states. Deepfakes, realistic but fabricated videos created with AI tools, can serve as an effective propaganda weapon.

Such videos are already appearing on social media. While this kind of content has existed for years, produced with animation and rendering software, generative AI tools now make it easy for anyone to create, and rogue actors can exploit them more readily than ever to manipulate public opinion on sensitive issues, especially during election campaigns.

Social media platforms like YouTube, Facebook, and Twitter have already taken steps to curtail deepfakes, but the tactics used to create and publish them keep evolving alongside the technology. At the moment, tools from companies like Microsoft and OpenAI can be used to create content for spreading disinformation.

A far bigger concern, according to researchers at Rand Corporation and Gryphon Scientific, is that AI models could leak information useful for developing biological weapons. US intelligence agencies, academic experts, and think tanks share the worry that such capabilities could end up in the hands of rogue elements.

The Gryphon study showed that LLMs can produce expert, doctoral-level knowledge that could aid in developing viruses with pandemic potential, knowledge that non-state actors could turn into biological weapons.

Amplified cyberattacks with AI models

The Department of Homeland Security has also warned that AI could be used to mount cyberattacks on critical infrastructure such as railways and pipelines, by helping attackers develop new tools capable of larger-scale, more complex, and faster attacks.

The agency also said that China is developing artificial intelligence software that can be used in malware attacks and is working on AI technologies that could sabotage US cyber defenses.

Functions and entities forming the Chinese Communist Party’s ecosystem. Source: Microsoft.

Back in February, Microsoft published a report noting that it had identified hacking groups refining their campaigns with LLMs, groups affiliated with Russian military intelligence, the North Korean and Chinese governments, and Iran’s Revolutionary Guard.

The company announced a ban on state-funded cyber groups using its AI products and services. On Wednesday, a group of lawmakers introduced a bill that would let the Biden administration enforce export controls on AI models to keep them out of the hands of potential adversaries.

Experts say Washington is trying to avoid overbearing regulation that could stifle innovation while it searches for ways to address the risks posed by AI. They also warn that regulating AI development too strictly would create a vacuum for overseas competitors to fill and would hurt fields such as infrastructure, national security, and drug discovery.
