
Anthropic CEO encourages caution about the safety of AI models

In this post:

  • Anthropic's CEO advocated for mandatory safety testing of AI models at a recent AI safety summit.
  • He noted that companies currently follow only self-imposed guidelines.
  • He also said testing requirements should remain flexible because the technology is still evolving.

Dario Amodei, the CEO of Anthropic, recently said that artificial intelligence companies, including his own, should be subject to mandatory testing requirements. The goal is to ensure these technologies are safe for the general public before they are released.

Amodei was answering a question at an AI safety summit recently held in San Francisco by the US Departments of Commerce and State. In his response, the CEO stated, “I think we absolutely have to make the testing mandatory, but we also need to be really careful about how we do it.”

These remarks follow a release by the UK and US AI Safety Institutes containing the results of their testing of Anthropic's Claude 3.5 Sonnet model. The tests covered several categories, including biological and cybersecurity applications. Both OpenAI and Anthropic had previously agreed to submit their models to government agencies for testing.

Major companies like Anthropic follow self-imposed safety guidelines

Amodei noted that major companies have voluntarily adopted certain self-imposed guidelines, including OpenAI's preparedness framework and Anthropic's responsible scaling policy. However, he added that more work is needed to ensure safety.

“There’s nothing to really verify or ensure the companies are really following those plans in letter or spirit. They just said they will,” Amodei said. He added, “I think just public attention and the fact that employees care has created some pressure, but I do ultimately think it won’t be enough.”


Amodei believes that AI systems powerful enough to outperform the smartest humans could become available as soon as 2026. He added that AI companies are testing for certain catastrophic harms and biological threats that are not yet real but could materialize much sooner than expected.

He also cautioned that the requirements for testing AI systems should remain flexible, given how quickly the technology is evolving. In his words, it is a very difficult “socio-political problem” to solve.

