
Has Safety Taken a Back Seat at OpenAI?

In this post:

  • Jan Leike, OpenAI’s Superalignment Team Lead, has raised serious questions about the company’s changing safety practices.
  • Leike resigned hours after co-founder Ilya Sutskever’s departure.
  • The Superalignment team has reportedly been dismantled, leaving the company’s AI models under less scrutiny.

After co-founder Ilya Sutskever left the firm earlier this week, Jan Leike, a prominent researcher, announced on Friday morning that “safety culture and processes have taken a backseat to shiny products” at the company.

Jan Leike said in a series of posts on the social media platform X that he joined the San Francisco-based startup because he believed it would be the best place to conduct AI research.

Leike co-led OpenAI’s “Superalignment” team with Sutskever, the co-founder who also quit this week.

OpenAI’s Superalignment Team Is No Longer Intact

Leike’s Superalignment team was formed last July at OpenAI to address the core technical challenges of implementing safety measures as the company advances AI that can reason like a human.

Leike’s statements came after a WIRED report claimed that OpenAI had completely dissolved the Superalignment team, which was tasked with addressing the long-term risks associated with AI.


Sutskever and Leike were not the only departures. At least five more of OpenAI’s most safety-conscious employees have quit or been dismissed since last November, when the board attempted to remove CEO Sam Altman, only to watch him maneuver his way back into the position.

OpenAI Should Become a Safety-First AGI Company

Leike pointed, across several platforms, to the technology’s most contentious prospect: machines that are either as generally intelligent as humans or at least capable of performing many tasks just as well. He wrote that OpenAI needs to transform into a safety-first AGI company.


In response to Leike’s posts, OpenAI CEO Sam Altman expressed gratitude for Leike’s service to the company and sadness at his departure.

Altman said in an X post that Leike is correct and that he would write a longer post on the topic in the coming days, adding,

“We have a lot more to do; we are committed to doing it.”

Leike has left OpenAI’s Superalignment team, and John Schulman, a co-founder of the company, has taken over.

But the team has been hollowed out, and Schulman is already overburdened with his full-time work of securing OpenAI’s existing products. How much significant, future-focused safety work can OpenAI still produce? There seems to be no satisfactory answer.

Jan Leike Has Ideological Differences With Management

Although OpenAI, as its name suggests, originally intended to share its models freely with the public, the company now says that making such potent models available to anyone could be harmful, so the models have become proprietary.

Leike said in a post that he had disagreed with OpenAI leadership about the company’s priorities for quite some time, until the disagreement finally reached a tipping point.


Leike’s last day at the company was Thursday. He did not sugarcoat his resignation with warm send-offs or any hint of confidence in OpenAI’s leadership. On X, he posted simply, “I resigned.”

One of Leike’s followers commented that he was delighted Leike was no longer part of the team, arguing that “woke” ideologies are not in line with humanity, and that the less aligned they become, the more they are put into AI.


The same commenter said he would also demand a definition of alignment from all aligners. He was referring to Leike’s parting recommendation urging the remaining OpenAI employees to deliver the cultural change he believes the company needs.

The world’s leading AI company appears to be changing course on the safety measures long stressed by experts, and the departure of its top safety researchers seems to confirm it.


Cryptopolitan reporting by Aamir Sheikh

