
Employees claim OpenAI and Google DeepMind are hiding AI risks

In this post:

  • Former employees accuse AI firms of hiding risks that could trigger human extinction.
  • They say companies put profit first while avoiding “effective oversight.”
  • Whistleblowers are discouraged by the threat of retaliation from employers.

Several current and former employees of OpenAI and Google DeepMind accused their companies of hiding AI risks that could potentially trigger human extinction. In an open letter, the workers alleged that AI firms are putting profit first while avoiding "effective [governmental] oversight."


The open letter was signed by 11 former employees of OpenAI, two from Google DeepMind, and endorsed by the 'godfather of AI' Geoffrey Hinton, formerly of Google. It said profit motives and regulatory loopholes allow companies to cover up threats posed by advanced artificial intelligence.

AI could lead to 'human extinction'

According to the letter, AI firms such as OpenAI and Google DeepMind, creator of Gemini, have not publicly shared information about the inadequate safeguards and risk levels of their systems because they are not required to do so.

Without regulatory oversight, the AI programs could cause major harm to humans. The employees warned:

"These risks range from the further entrenchment of existing inequalities… to the loss of control of autonomous AI systems potentially resulting in human extinction."

As noted by the letter's signees, AI firms themselves have acknowledged these risks. In May 2023, the CEOs of OpenAI, Anthropic, and Google DeepMind co-signed an open letter by the Center for AI Safety. The letter simply read, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war."


AI companies have increasingly come under scrutiny for placing novel products and profit over safety and security. ChatGPT-maker OpenAI noted the concerns raised in the letter and said it is working to develop artificial intelligence that is safe.

"We're proud of our track record providing the most capable and safest A.I. systems and believe in our scientific approach to addressing risk," OpenAI spokeswoman Lindsey Held told the New York Times.

"We agree that rigorous debate is crucial given the significance of this technology, and we'll continue to engage with governments, civil society and other communities around the world."


Whistleblowers fear retaliation

The former OpenAI and Google employees said they would like to fill the oversight role while regulators work out laws that compel AI developers to publicly disclose more information about their programs.

However, confidentiality agreements and the likelihood of retaliation by employers discourage workers from publicly voicing their concerns. Per the letter:

"Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry. We are not the first to encounter or speak about these issues."

Whistleblowers are protected by law in the U.S., but those speaking out about the dangers of artificial intelligence are not covered because the technology is not yet regulated.



The letter called on AI companies to facilitate verifiably anonymous feedback, support a culture of open criticism and not retaliate against whistleblowers.

AI researcher Lance B. Eliot said firms take a carrot-and-stick approach to criticism. Employees who do not speak out against risks are rewarded with job promotions and pay raises.

Whistleblowers and critics, on the other hand, lose their stock options and may be forced out of the company and quietly blacklisted by the AI leadership community.


Cryptopolitan Reporting by Jeffrey Gogo


