
Google, OpenAI, and 13 Others Pledge Not To Deploy Risky AI Models


In this post:

  • OpenAI, Microsoft, Google, and 13 other AI companies have agreed to a new AI safety pledge brokered by UK authorities.
  • The pledge requires the companies to stop developing any model whose risks are deemed extreme.
  • The UK's Prime Minister says it is the first time global AI companies have pledged to the same safety standard.

Google, OpenAI, and Meta have agreed to stop developing any AI model if they cannot contain the risks. The companies signed up for the โ€œAI Safety Commitmentsโ€ on Tuesday at the AI Seoul Summit hosted by the UK and South Korea.

Also read: UK and Republic of Korea Collaborate on AI Summit

It’s a world first to have so many leading AI companies from so many parts of the globe all agreeing to the same commitments on AI safety.

โ€” UKโ€™s Prime Minister, Rishi Sunak

16 AI Firms Agree To AI Safety Commitments

Per the report, a total of 16 AI companies agreed to the safety pledge, spanning the US, China, and the Middle East. 

Microsoft, Amazon, Anthropic, Samsung Electronics, and Chinese developer Zhipu.ai are also among the companies that agreed to the safety standards.

Also read: Alibaba and Tencent Invest $342 Million in AI Startup Zhipu

The AI safety pledge requires all companies to publish their respective safety frameworks ahead of the next AI Action Summit, to be held in France in early 2025. The frameworks will explain how each company determines the risks of its models and which risks are “deemed intolerable.”

AI Companies Will Pull the Plug on Risky AI Models

In the most extreme cases, the firms will โ€œnot develop or deploy a model or system at allโ€ if the risks cannot be contained, according to the report. 

The true potential of AI will only be unleashed if weโ€™re able to grip the risks. It is on all of us to make sure AI is developed safely. 

โ€” Michelle Donelan, UKโ€™s Technology Secretary

In July 2023, the US government made a similar effort to address the risks and benefits of AI. President Joe Biden met with Google, Microsoft, Meta, OpenAI, Amazon, Anthropic, and Inflection to discuss AI safeguards that ensure their AI products are safe before being released.


AI Safety Debate Heats up Over OpenAI

The conversation on AI safety has been heating up in recent months, particularly around artificial general intelligence (AGI), systems that aim to mimic human-like general intelligence.

One of those companies, OpenAI, found itself at the center of this conversation last week after co-founder Ilya Sutskever and top-level executive Jan Leike resigned. The two led OpenAI’s Superalignment team, which was set up to prevent the company’s models from going rogue.

Also read: Another OpenAI Exec, Jan Leike Quits

In his post, Leike said that โ€œover the past years, safety culture and processes have taken a backseat to shiny productsโ€ at the company. 

Leike added that “OpenAI must become a safety-first AGI company” and that the industry must prioritize preparing for AGI as best it can to ensure it benefits all of humanity.


Cryptopolitan reporting by Ibiam Wayas

Disclaimer. The information provided is not trading advice. Cryptopolitan.com holds no liability for any investments made based on the information provided on this page. We strongly recommend independent research and/or consultation with a qualified professional before making any investment decisions.
