Several current and former employees of OpenAI and Google DeepMind accused their companies of hiding AI risks that could potentially trigger human extinction. In an open letter, the workers alleged that AI firms are putting profit first while avoiding "effective [governmental] oversight."
The open letter was signed by 11 former employees of OpenAI, two from Google DeepMind, and endorsed by the "godfather of AI" Geoffrey Hinton, formerly of Google. It said profit motives and regulatory loopholes allow companies to cover up threats posed by advanced artificial intelligence.
AI could lead to "human extinction"
According to the letter, AI firms such as OpenAI and Google DeepMind, creator of Gemini, have not publicly disclosed the risk levels of their systems or the inadequacy of their safeguards because they are not required to do so.
Without regulatory oversight, these AI systems could cause major harm to humans. The employees warned:
"These risks range from the further entrenchment of existing inequalities… to the loss of control of autonomous AI systems potentially resulting in human extinction."
As the letter's signatories noted, AI firms themselves have acknowledged these risks. In May 2023, the CEOs of OpenAI, Anthropic and Google DeepMind co-signed an open letter by the Center for AI Safety. The letter simply read, "Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks, such as pandemics and nuclear war."
AI companies have increasingly come under scrutiny for prioritizing novel products and profit over safety and security. ChatGPT-maker OpenAI acknowledged the concerns raised in the letter and said it is working to develop artificial intelligence that is safe.
"We're proud of our track record providing the most capable and safest A.I. systems and believe in our scientific approach to addressing risk," OpenAI spokeswoman Lindsey Held told the New York Times.
"We agree that rigorous debate is crucial given the significance of this technology, and we'll continue to engage with governments, civil society and other communities around the world."
Whistleblowers fear retaliation
The former OpenAI and Google employees said they would like to fill the oversight role while regulators work out laws that compel AI developers to publicly disclose more information about their programs.
However, confidentiality agreements and the likelihood of retaliation by employers discourage workers from publicly voicing their concerns. Per the letter:
"Some of us reasonably fear various forms of retaliation, given the history of such cases across the industry. We are not the first to encounter or speak about these issues."
Whistleblowers are protected by law in the U.S., but those speaking out about the dangers of artificial intelligence are not covered because the technology is not yet regulated.
The letter called on AI companies to facilitate verifiably anonymous feedback, support a culture of open criticism and not retaliate against whistleblowers.
AI researcher Lance B. Eliot said firms take a carrot-and-stick approach to criticism: employees who stay quiet about risks are rewarded with job promotions and pay raises.
Whistleblowers and critics, on the other hand, lose their stock options and may be forced out of the company and quietly blacklisted by the AI leadership community.
Cryptopolitan Reporting by Jeffrey Gogo