Elon Musk is not too worried about AI overtaking humans. According to him, there’s only a 20% chance that AI will annihilate humans in the near future.
While speaking with Joe Rogan in a new podcast released on Friday, Musk said, “The probability of a good outcome [with AI] is like 80%.”
It’s not the first time he has talked about the dangers of AI. He has previously put the chance of AI turning against humans at 10% to 20%.
In the recent podcast, he also said that AI will surpass human intelligence within the next two years, and that by 2029 or 2030 it will be “smarter than all humans combined.”
This is in line with his previous predictions, though the timeline has been pushed back. Last year, in an interview with Norges Bank CEO Nicolai Tangen, he said there was a slight probability that AI would surpass humans in intelligence by the end of 2025.
Discussion with head of Norway’s sovereign fund, @NicolaiTang1 https://t.co/ZCR7FrsR0m
— Elon Musk (@elonmusk) April 7, 2024
This shows that his fundamental outlook towards AI hasn’t changed. He said in the interview, “I always thought AI was going to be way smarter than humans and an existential risk,” and this is becoming a reality.
Other experts have a similar perspective on AI annihilation
According to Geoffrey Hinton, the ‘Godfather of AI’ and Nobel Prize winner, there’s a 10% chance that AI will cause human extinction in the next three decades. He said during a BBC Radio 4 interview, “You see, we’ve never had to deal with things more intelligent than ourselves before.”
He further added, “And how many examples do you know of a more intelligent thing being controlled by a less intelligent thing? There are very few examples. There’s a mother and baby. Evolution put a lot of work into allowing the baby to control the mother, but that’s about the only example I know of.”
In effect, Hinton argues that humans will be akin to “three-year-old” toddlers compared with advanced artificial intelligence systems.
Similarly, during an academic talk at the University of Toronto, Hinton said, “My guess is that they will take over – they’ll be much, much more intelligent than people ever were.”
Meanwhile, Roman Yampolskiy, an AI safety researcher and cybersecurity director, says that the “probability of doom” is close to 99.999999%.
I am at 99.999999% https://t.co/vfe96PNbij
— Dr. Roman Yampolskiy (@romanyam) March 12, 2024
Musk got involved with AI development as one of the 11 co-founders of OpenAI, which was created as a “nonprofit open source AI” that he says was “the opposite of Google.” However, he later left the company over a misalignment in priorities, as OpenAI changed course to become a for-profit company.
Last year, Musk filed two lawsuits against OpenAI. He dropped the first; the second maintains that OpenAI “betrayed” its mission by transitioning to a for-profit model and teaming up with Microsoft.
Musk thinks the result of AI advancement will be either extremely good or extremely bad
He shared with Joe Rogan that he created Grok because he was “not happy” with how OpenAI turned out. He made Grok a “maximally truth-seeking AI, even if that truth is politically incorrect.”

The AI has been tested with prompts such as whether it would be acceptable to misgender Caitlyn Jenner in order to avert a nuclear apocalypse, and whether being racist against white people is OK.
Musk predicts AI advancement will be either “super awesome” or “super bad” while adding that he cannot see it as “something in the middle.”
Recently, Musk announced the release of Grok 3, which is claimed to perform better across various coding, science, and math benchmarks than OpenAI’s GPT-4o, Google’s Gemini, DeepSeek’s V3, and Anthropic’s Claude 3.5.
Just a week before its release, Elon Musk referred to Grok 3 as “scary smart” during Dubai’s World Governments Summit. Similarly, at Grok 3’s launch presentation, he said Grok’s purpose “is to understand the universe.”
Yet the beta version of Grok 3 went on to make several blunders, such as suggesting that Donald Trump deserved the death penalty and censoring criticism of Elon Musk. Igor Babuschkin, the head of engineering at xAI, referred to it as a “really terrible and bad failure from Grok.”