China’s President Xi Jinping and US President Joe Biden agreed at a historic summit last month in San Francisco that it is critical to “address the risk of advanced AI systems and raise the security of AI.” Yet although the two superpowers have committed to cooperating on controlling the military use of AI, the lack of specifics and their continued disagreements make meaningful progress unlikely.
As the race for AI supremacy in military applications intensifies between China and the United States, concerns loom over whether the two powers can move beyond geopolitical rivalry and effectively manage the risks of advanced AI systems. While the leaders of both nations underlined the need to regulate the military use of AI, they offered no concrete details at their recent summit, leaving the international community questioning the depth of their commitment.
In 2019, China, the US, and 96 other nations endorsed guidelines for AI-enhanced lethal autonomous weapon systems (LAWS), a shared acknowledgment of the need to retain human responsibility for their use. Yet the non-binding nature of these guidelines and the absence of a common definition of LAWS present significant obstacles. Dr. Guangyu Qiao-Franco, an assistant professor specializing in politics and AI, is skeptical that the US and China can move beyond existing agreements and collaborate effectively: each side’s drive to limit the other’s technological development while increasing its own technological independence continues to strain relations.
The multifaceted applications of AI in military operations raise concerns about harm to civilians. Neil Davison, a senior scientific and policy adviser at the International Committee of the Red Cross, argues that regulation should target specific AI applications rather than general principles: image recognition for target identification, data analysis for battlefield decision-making, and AI-driven cyberattacks each pose distinct challenges that require tailored regulatory frameworks.
Mutual vulnerability – A potential catalyst for cooperation
The lack of a clear definition of lethal autonomous weapon systems complicates efforts to regulate or ban them through international treaties. Divisions have emerged between developed and developing nations, with richer states advocating narrowly drawn restrictions that would still permit precise, stable AI-enabled weapons. China occupies a unique position: it presents itself as the voice of the Global South while simultaneously investing heavily in AI research and advocating a narrow definition of LAWS.
The mutual vulnerability arising from the deployment of military AI systems may serve as a catalyst for China and the US to establish binding regulations. Backchannel meetings between the two nations, including discussions between Tsinghua University’s Centre for International Security and Strategy and the Washington-based Brookings Institution, indicate a willingness to engage in dialogue on AI. Dr. Lora Saalman, a senior researcher at the Stockholm International Peace Research Institute, suggests that a joint US-China statement on the importance of human control in nuclear decision-making could be a viable starting point.
As China and the US navigate the complexities of AI risks, the question remains: can these global powers overcome their geopolitical differences and agree on a common set of binding regulations for the military application of AI? Definitional disputes, divergent perspectives on LAWS, and the rapid pace of AI advancement create formidable barriers. Yet with mutual vulnerability in the spotlight, there is a glimmer of hope that collaborative efforts could emerge, shaping the future of responsible military AI. Will these nations find common ground and lead the way in crafting effective regulations, or will geopolitical tensions continue to impede progress in this critical realm?