In a groundbreaking move, British Prime Minister Rishi Sunak is spearheading an initiative to classify artificial intelligence as capable of “catastrophic harm” at the upcoming AI Safety Summit hosted by the U.K. The summit, scheduled for next month at Bletchley Park, aims to establish a unified global stance on the risks posed by rapidly advancing AI technology. The draft communique, circulated to attendees and obtained by Bloomberg, outlines specific concerns regarding AI’s impact on cybersecurity and biotechnology. As the international community grapples with the transformative opportunities presented by AI, Sunak seeks to position the U.K. as a leader in shaping regulatory approaches and establishing industry “guardrails.”
Concerns and safeguards
At the heart of the U.K.’s push is the acknowledgment of the potential for “significant, even catastrophic, harm” stemming from the most dangerous capabilities of AI models. The draft communique, dated October 16, underscores the need for a joint international position on the specific safety risks associated with both general-purpose AI and narrow AI. Officials representing 28 nations, along with a diverse array of stakeholders, are set to finalize the communique by October 25.
The document highlights the dual nature of AI, recognizing its transformative opportunities, particularly in public services such as health, education, science, and clean energy. A person familiar with the matter said this aspect will be a prominent focus during the summit, highlighting the positive potential AI holds for societal progress.
But the draft emphasizes that the risks, particularly at the “frontier” of AI development, necessitate clear evaluation metrics, safety testing tools, and enhanced public sector capability. Acknowledging the need for a balanced approach, the summit aims to address both the promises and challenges of AI, ensuring that advancements are harnessed responsibly to benefit humanity.
Global leaders are expected to echo the call for increased transparency from companies involved in AI development, supported by relevant scientific research. The document underscores that safety risks emerge not only from general-purpose AI but also from narrow AI systems that could exhibit dangerous capabilities.
Global coordination and responsibility in AI safety measures
While the U.K. takes the lead, the draft and other circulated documents reveal that the European Commission is advocating for an international collaboration process on frontier AI safety. The European bloc seeks an approach aligned with its own AI legislation, focusing on potential misuse in cyber-attacks and the risk of advanced systems slipping out of human control.
The documents highlight the responsibility of developers working on powerful and potentially dangerous AI capabilities. A strong emphasis is placed on ensuring the safety of these capabilities, with “relevant actors” urged to provide context-appropriate transparency and accountability on plans to measure, monitor, and mitigate potential dangers. The U.K. plans to maintain momentum with follow-up meetings every six months, dedicated to tracking progress on managing the opportunities and risks associated with AI.
The draft communique emphasizes the necessity for AI to be designed, developed, deployed, and used in a manner that prioritizes the “common good” and aligns with principles of being human-centric, safe, trustworthy, and responsible. As the international community converges at Bletchley Park, the world watches to see whether a unified stance on AI safety will emerge, paving the way for a secure and responsible AI future.