
Meta’s AI Chief Says It’s Too Early to Worry About AGI

In this post:

  • Meta’s AI chief believes it’s too early to worry about the existential risk of AI.
  • LeCun said AI systems cannot spontaneously become dangerous.
  • Meta and OpenAI have both revealed intentions to create AGI.

Meta’s AI chief, Yann LeCun, has said it’s too early to start worrying about human-level intelligent AI systems, commonly referred to as AGI. LeCun’s comment comes as debates over AGI’s potential impact on humanity’s existence continue to gather steam. 

Also Read: A Closer Look at the AGI Speculation

“Right now, we don’t even have a hint of a design of a human-level intelligent system,” LeCun posted on X Monday. “So it’s too early to worry about it. And it’s way too early to regulate it to prevent ‘existential risk.’”

AI Cannot Just Become Dangerous, Says LeCun

Many experts believe AGI is decades or even centuries away from becoming a reality. Even so, it has been a cause for concern among governments, as some experts warn that such models could threaten humanity’s existence. 

Also Read: Google, OpenAI, and 13 Others Pledge Not to Deploy Risky AI Models

LeCun believes that AI systems are not “some sort of natural phenomenon that will just emerge and become dangerous.” He said humans have the power to make AI safe because we are the ones creating it. 

“I can imagine thousands of scenarios where a turbojet goes terribly wrong. Yet we managed to make turbojets insanely reliable before deploying them widely,” LeCun added. “The question is similar for AI.”

Meta’s AI Chief Says LLMs Cannot Achieve Human Intelligence

Last week, LeCun also opined that large language models, which power popular AI chatbots like ChatGPT, Gemini, etc., cannot reach human intelligence. 

Also Read: OpenAI’s ChatGPT-4o Can Show Feeling and Emotions

In an interview with Forbes, LeCun said LLMs have a “limited understanding of logic,” given that they must be trained and can only perform as well as the data they are fed. He further noted that LLMs are “intrinsically unsafe” and that researchers looking to build human-level AI systems would need to consider other model types.

OpenAI, Meta Confirm Interest in Making AGI

ChatGPT maker OpenAI plans to create such powerful models. In May, co-founder Sam Altman said the company would build AGI no matter the cost.

“Whether we burn $500 million a year or $5 billion—or $50 billion a year—I don’t care, I genuinely don’t. As long as we can figure out a way to pay the bills, we’re making AGI. It’s going to be expensive.”

Sam Altman

Meta has also begun putting efforts toward achieving human-level intelligence. In January, Mark Zuckerberg admitted that Meta’s “long-term vision is to build general intelligence, open source it responsibly, and make it widely available so everyone can benefit.”

Cryptopolitan reporting by Ibiam Wayas
