
Another OpenAI Exec, Jan Leike, Quits

In this post:

  • OpenAI executive Jan Leike has left the company after co-founder Ilya Sutskever resigned.
  • Both co-led the OpenAI Superalignment team, which is responsible for preventing rogue superintelligence.
  • The departures come just two days after OpenAI announced GPT-4o.

Two senior OpenAI executives have now resigned from the company just days after the unveiling of GPT-4o, the firm’s latest and more “emotive” flagship model.

OpenAI’s Superalignment Team Co-lead Leaves

Jan Leike, who co-led the Superalignment team at OpenAI, posted on X on Wednesday: “I resigned.”

Leike has yet to provide any further details about his decision. His post, however, came hours after OpenAI co-founder Ilya Sutskever announced he was leaving the company to pursue a “project that is very personally meaningful to me.”

Before quitting OpenAI, Sutskever had been publicly silent since December 2023. That his first post after the break was a resignation notice left some people concerned, given his position at the company.

Both Leike and Sutskever co-led the Superalignment team at OpenAI. The unit was formed to prevent superintelligent AI models from going rogue. 

Also read: OpenAI Launches ‘Superalignment’ Team

Earlier this month, Sam Altman, the co-founder and CEO of OpenAI, confirmed that the company plans to build such a superintelligent model, popularly referred to as artificial general intelligence (AGI). An AGI could perform tasks as well as humans, and possibly beyond.

“Whether we burn $500 million a year or $5 billion—or $50 billion a year—I don’t care, I genuinely don’t,” Altman told students at Stanford University. “As long as we can figure out a way to pay the bills, we’re making AGI. It’s going to be expensive.”


Sutskever Says He’s Confident OpenAI Will Build Safe AGI

Opinions vary on what AGI will bring. OpenAI has said it could “elevate humanity,” while many argue that substantial progress toward AGI could result in human extinction.

Consequently, many people are concerned about how companies like OpenAI are building AGI. Sutskever, however, affirmed that OpenAI will build AGI that is safe.

“I’m confident that OpenAI will build AGI that is both safe and beneficial under the leadership of” Sam Altman, Greg Brockman, Mira Murati, and Jakub Pachocki, Sutskever wrote.

Jakub Pachocki will replace Sutskever as OpenAI’s new chief scientist. Pachocki previously served as the company’s director of research.


