Artificial Intelligence Poses New Risks in Biological Weapons Development



  • Advances in science and AI are making it easier to create dangerous viruses, raising biosecurity concerns.
  • A small team recently synthesized a virus from scratch, highlighting the accessibility of this technology.
  • Established institutions like the Australia Group play a crucial role in preventing the misuse of AI and biotechnology for harmful purposes.

The world has witnessed unprecedented advances in chemistry, synthetic biology, and artificial intelligence (AI) in recent years. Combined, these innovations have ushered in a new era of scientific possibility, but they also bring an emerging risk: their potential misuse. The synthesis of viruses, once a task reserved for highly specialized experts, is becoming more accessible, raising concerns that deadly pathogens could be created deliberately.

A notable turning point

A significant turning point in this arena occurred when a small team of researchers carried out the de novo synthesis of horsepox, an orthopox virus. Although less pathogenic than smallpox, the virus became a symbol of concern because of the ease with which it was created. The team worked from the published genome sequence of a horsepox strain isolated during a 1976 outbreak in Mongolia and assembled the virus from DNA fragments ordered from a commercial DNA synthesis company.

This endeavor marked a “Rubicon in the field of biosecurity,” demonstrating that an orthopox virus could be created from scratch, using commercially available materials and publicly available information, for approximately $100,000.

The role of artificial intelligence

As technology continues to evolve, the role of AI in this landscape cannot be ignored. Large language models (LLMs), like those that have become publicly accessible since late 2022, could aid in the construction of chemical or biological weapons. While discussions about AI have often centered on hypothetical superintelligent systems, a more immediate concern is that current LLMs are lowering the “informational barriers” to creating deadly pathogens. Armed with the right knowledge in synthetic biology, these models could help individuals with minimal training overcome the challenges of producing a viable pathogen with pandemic potential.

The accessibility of LLMs

One striking aspect of this concern is the accessibility of LLMs. Creating synthetic pathogens is becoming increasingly feasible with just a desktop whole genome synthesizer, access to specific literature, and some scientific training. The cost of such an endeavor has dropped dramatically, with estimates suggesting it could soon require little more than $20,000. This accessibility raises questions about how to regulate and monitor the use of AI technology in this context.

Addressing the risks

It is crucial to note that developing a synthetic pathogen does not guarantee the onset of a pandemic. However, the potential risks are undeniable. To manage these risks, experts suggest looking to established institutions that have already played pivotal roles in controlling biological and chemical weapons.

The Australia Group’s role

One such institution is the Australia Group (AG), founded in 1985 during the Iran-Iraq war. Initially focused on controlling precursor chemicals for unconventional weapons, the AG has evolved to harmonize the regulation of “chem-bio” components. In the face of the new age of AI and the progress in synthetic biology, the AG could serve as a platform for international cooperation to address these emerging threats. By developing comprehensive common control lists and coordinating efforts to prevent misuse of technology, the AG can play a vital role in countering the potential dangers posed by the convergence of AI and biotechnology.


As humanity faces the ongoing challenges of a global pandemic and other crises, the emergence of new threats in biological weapons is a reminder of the need for proactive measures. The rapid advancement of technology, coupled with the accessibility of AI and synthetic biology, requires a coordinated international response. The Australia Group stands as a beacon of hope, offering a platform to navigate these challenges and protect global security.

In the absence of proactive measures, the risks of misuse of AI and biotechnology loom large, and the world must unite to prevent the potentially catastrophic consequences that could result from creating and deploying synthetic pathogens. The time to act is before the world confronts a new and unforeseen threat to global stability and security.


Brenda Kanana
