The rapid advancement of artificial intelligence has led to numerous breakthroughs in science and healthcare, revolutionizing many fields for the better. However, it also raises serious concerns about potential misuse, particularly in the context of bioterrorism and biological weapons. The infamous case of Aum Shinrikyo, the Japanese cult that attempted bioterror attacks using botulinum toxin, illustrates how access to contemporary AI language models could increase the risk of devastating bioterrorism events: the cult's attacks failed largely for lack of technical expertise that such tools might now help supply.
Large language models like ChatGPT and AI-powered biological design tools may act as threat multipliers, allowing ill-intentioned actors to access dual-use scientific knowledge and accelerate the development of dangerous biological agents. As the intersection of AI and biology becomes more accessible, urgent measures must be taken to strengthen biosecurity and AI governance.
The threat of AI language models in bioterrorism
The ease with which large language models like ChatGPT can provide information on potentially dangerous pathogens and bioweapons is a major concern. A recent exercise at MIT demonstrated that, in just one hour, ChatGPT could instruct non-scientist students on four potential pandemic pathogens and on how to acquire genetic material while evading detection. Such information dissemination could empower ill-intentioned actors to pursue biological weapons without the necessary expertise.
Lowering the barrier of tacit knowledge
While AI language models cannot provide hands-on experience, they can lower the barrier of tacit knowledge needed to create biological agents. As AI systems continue to advance, they may facilitate the automation of science, enabling the covert development of biological weapons and reducing the number of trained scientists such projects require.
The role of biological design tools
Specialized AI tools like AlphaFold2 and RFdiffusion are already pushing the boundaries of biological design capabilities. While they have the potential to bring significant advances in medicine, they could also exacerbate biological risks by enabling the creation of highly dangerous pathogens with unprecedented properties. Such tools may undermine existing measures for controlling access to dangerous toxins and pathogens, as they can generate agents whose dangerous properties are not covered by conventional screening methods.
Mandatory gene synthesis rules
Universal gene synthesis screening is essential to mitigating risks arising from AI's intersection with biology. This involves screening both orders and customers to ensure that only legitimate researchers can access genetic material for controlled agents. Mandatory baseline screening for gene synthesis providers and other critical service providers is necessary to prevent the exploitation of weaknesses in supply chain security.
The need for AI governance
AI-specific interventions are vital to managing the risks of large language models and biological design tools. Pre-release evaluations of model capabilities can ensure that dangerous functionalities are not present upon public release. Releasing models through structured access methods, rather than open-sourcing, allows continuous updating of safeguards. Policymakers must grapple with who should have access to dual-use scientific capabilities and consider input from diverse voices across disciplines, demographics, and geographies.
Striking a balance between enabling legitimate scientific research and preventing the dissemination of dangerous knowledge is crucial. Public versions of language models like ChatGPT can be designed to withhold detailed instructions for creating dangerous pathogens, while specialized access methods are provided for legitimate scientists with appropriate training and approval.
The convergence of AI and biology has vast potential for positive impact on science and healthcare. However, the same advancements pose serious risks, particularly concerning bioterrorism and biological weapons. As AI language models and biological design tools become increasingly accessible, urgent measures are needed to strengthen biosecurity and governance. Implementing mandatory gene synthesis screening, conducting pre-release evaluations of model capabilities, and establishing differentiated access methods for legitimate scientists are key steps toward addressing these emerging challenges. Swift action by policymakers can not only enhance safety but also pave the way for responsibly harnessing the benefits of artificial intelligence.