How Can the Seoul Declaration Redirect AI Development Towards Public Interest

In this post:

  • The European Union and ten countries signed a new agreement for AI safety at the Seoul AI Summit.
  • The nations have pledged to build a network of AI safety institutes to strengthen global governance capacity.
  • Sixteen tech firms working in the AI field have also voluntarily committed to safer AI development and responsibility.

This week, new commitments to AI safety were agreed upon during the AI Seoul Summit. The Seoul Declaration affirms a shared view of the benefits and risks associated with AI.

South Korean President Yoon Suk Yeol co-chaired the summit with British Prime Minister Rishi Sunak, who joined via videoconference. Both leaders approved the Seoul Declaration, which emphasizes the safety, inclusivity, and innovation of AI.

Also read: Dubai FinTech Summit Concludes With Over 8,000 Visitors From 118 Countries

Nations Sign the Seoul Declaration 

The European Union and ten countries, including the United States, Canada, France, Australia, and Japan, signed the Seoul Declaration. The declaration calls for cooperation between countries on AI development in the public interest.

Seoul AI Summit. Source: UK Government.

A Seoul Ministerial Statement was signed by 27 countries, including Indonesia, Mexico, Nigeria, the United Arab Emirates, and Saudi Arabia, among others. The statement also affirms that safety should be the priority in the development of AI. South Korean President Yoon said,

“I’m very pleased to host the AI Seoul Summit, which will expand the scope of discussion to innovation and inclusivity, after the inaugural summit at Bletchley Park of the UK in November last year discussed AI safety.”

He also said that Korea will push to establish an AI safety research center and join a network to “boost the global safety of AI.” Yoon said the summit will consolidate efforts to promote global AI standards and governance.

The Seoul Summit Is a Continuation of Bletchley Park

The Seoul AI Summit is a continuation of the first global AI Safety Summit, held at Bletchley Park in the UK in November last year. The Bletchley Park event was larger, with more countries committing to safer AI development.

AI Safety Summit at Bletchley Park. Source: Bletchley Park.

The summit bolstered international commitment to safe AI development and added innovation and inclusivity to the agenda. Critics say, however, that adding topics other than AI safety will dilute the agenda, since that singular focus was what made the Bletchley Declaration unique among many diplomatic efforts.

“But to get the upside, we must ensure it’s [AI] safe. That’s why I’m delighted we have got agreement today for a network of AI Safety Institutes,” said Rishi Sunak.

The Seoul AI Summit, though smaller in scale, was also important because it provided a platform to deepen partnerships among AI safety institutes. Canada, South Korea, and Japan announced that they will establish AI safety institutes of their own. These will complement the already established UK and US institutes and enhance global AI safety capacity.

Tech Companies Promise Safer AI Development

Sixteen top AI tech companies agreed on new commitments to develop artificial intelligence safely. Google, OpenAI, and Meta were among the US companies that made voluntary commitments at the Seoul summit.

Also read: Has Safety Taken a Back Seat at OpenAI?

China’s Zhipu.ai and Tencent, the UAE’s Technology Innovation Institute, and the G42 investment group also signed the “Frontier AI Safety Commitments” document. From the Korean side, Samsung and Naver agreed to the voluntary terms.

UN Secretary-General Antonio Guterres said in a video address at the opening session:

“We are seeing life-changing technological advances and life-threatening new risks — from disinformation to mass surveillance to the prospect of lethal autonomous weapons.”

The UN chief was referring to the last seven months since the Bletchley meeting. Guterres stressed the need for universal guardrails and regular dialogue on tech. Many of these companies also signed the nonbinding safety commitments initiated by the White House to ensure their products are safe for the public.

The AI industry has increasingly focused on the most pressing concerns, according to Cohere’s CEO, Aidan Gomez, whose company is also among those that signed the pact. Gomez said, “It is essential that we continue to consider all possible risks while prioritizing our efforts on those most likely to create problems if not properly addressed.”

Cryptopolitan reporting by Aamir Sheikh


Disclaimer. The information provided is not trading advice. Cryptopolitan.com holds no liability for any investments made based on the information provided on this page. We strongly recommend independent research and/or consultation with a qualified professional before making any investment decisions.
