AI Struggles with Basic Reasoning




  • AI predicts but can’t simulate like humans, per Max Bennett.
  • Bennett highlights AI’s struggle with real-world tasks, exposing gaps. 
  • Simulated understanding key for AI advancement, Bennett asserts.

Renowned AI entrepreneur Max Bennett recently brought attention to the limitations of even the most advanced AI language models, citing examples where they falter in tasks that require basic reasoning. In his book A Brief History of Intelligence: Evolution, AI, and the Five Breakthroughs That Made Our Brains, Bennett emphasizes the disparities between the predictive nature of AI systems like GPT-3 and the multi-dimensional cognitive abilities innate to the human brain.

Bennett’s analysis examines the fundamental mechanism behind AI language models: predicting the next word in a sequence. Drawing parallels between this predictive process in GPT-3 and the human brain’s language processing, Bennett underscores the pivotal role of simulation in human cognition.
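The next-word prediction Bennett describes can be illustrated with a deliberately tiny bigram frequency model. This is an illustrative sketch only: GPT-3 is a neural network trained on billions of tokens, not a word-frequency table, but the core task, choosing the statistically likely next word, is the same.

```python
from collections import Counter, defaultdict

# Toy corpus; a real model trains on billions of tokens.
corpus = "the cat sat on the mat the cat ate the fish".split()

# For each word, count which words follow it and how often.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent next word seen in the corpus, or None."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("the"))  # "cat" — it follows "the" most often here
```

The model answers purely from observed co-occurrence statistics; nothing in it represents cats, mats, or the physical world, which is exactly the gap between prediction and simulation that Bennett highlights.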

The gap between simulation and prediction

Bennett shared the outcomes of his experiments in which GPT-3 struggled with basic real-world scenarios, displaying a clear lack of common-sense reasoning. While the model succeeded at predicting the next word in a sequence, it stumbled on questions requiring simulated understanding: tasks such as solving basic algebraic equations or visualizing real-world scenarios proved to be stumbling blocks for the model.

Through his analysis, Bennett underscores the foundational role of simulation in human language and cognition. He emphasizes how humans form connections between symbols and their inner simulations, a mechanism absent in the AI learning process. With poignant examples, Bennett highlights the crucial distinction between the way humans learn and utilize language compared to AI language models like GPT-3.

Bennett’s analysis serves as a stark reminder that despite their incredible advancements, AI language models lack the fundamental capacity for cognitive leaps and intuitive reasoning present in the human brain.

Implications for AI development and human understanding

Bennett’s insights have far-reaching implications for the development of AI technologies, advocating for a deeper understanding of human cognitive processes to bridge the gap between AI capabilities and human intelligence. The analysis encourages a reevaluation of AI learning methods, urging researchers to consider incorporating simulated understanding into the framework of AI systems.

In an increasingly AI-dependent world, Bennett’s exploration of the limitations of AI language models provides a critical perspective: understanding the intricacies of human cognition matters for the development of future AI technologies.

As AI continues to reshape the boundaries of human achievement, Bennett’s analysis offers a sobering reminder that aspects of human intelligence remain unparalleled in artificial systems, and that the quest for smarter, more capable AI will require a deeper grasp of how human cognition actually works.



Derrick Clinton



