Microsoft CTO defends AI scaling laws

- Microsoft CTO Kevin Scott affirms his belief in LLM scaling laws.
- Scott says that he is optimistic about future breakthroughs in AI capabilities.
- Microsoft’s stance aligns with its heavy investment in AI.
In a recent interview on Sequoia Capital’s Training Data podcast, Microsoft CTO Kevin Scott reiterated his belief in the enduring value of large language model (LLM) scaling laws.
Scott, who played a key role in Microsoft's $13 billion investment in OpenAI, remains optimistic about continued advances. Responding to the ongoing debate, he said the idea of scaling laws remains relevant to the field of AI.
OpenAI research supports scaling benefits
The LLM scaling laws proposed by OpenAI researchers in 2020 hold that a language model's performance improves predictably as the model grows: test loss falls as a power law of model size, dataset size, and training compute. Scott pushed back on claims of diminishing returns, arguing that exponential gains are still achievable, though it may take the next generation of supercomputers to realize them.
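The relationship described above can be sketched numerically. The snippet below is an illustrative toy, not a reproduction of OpenAI's methodology: it uses the approximate model-size fit reported in the 2020 scaling-laws paper (exponent and critical parameter count are the paper's rough published values) to show why loss keeps falling, by a fixed ratio rather than a fixed amount, each time the model is scaled up.

```python
# Illustrative sketch of the model-size scaling law from the 2020
# OpenAI paper "Scaling Laws for Neural Language Models":
# predicted test loss L(N) = (N_c / N) ** alpha_N for N parameters.
# The constants are the paper's approximate reported fits, used here
# only to show the shape of the curve.

ALPHA_N = 0.076   # power-law exponent for model size (approximate)
N_C = 8.8e13      # critical parameter count from the paper's fit (approximate)

def predicted_loss(n_params: float) -> float:
    """Predicted test loss (nats/token) for a model with n_params parameters."""
    return (N_C / n_params) ** ALPHA_N

# Each 10x increase in parameters shrinks loss by the same multiplicative
# factor, so gains continue but each step costs far more compute.
for n in (1e8, 1e9, 1e10, 1e11):
    print(f"{n:.0e} params -> predicted loss ~ {predicted_loss(n):.2f}")
```

The power-law form is the crux of the debate in this article: it predicts steady improvement from scale alone, while critics argue the real curve has already flattened beyond what the fit implies.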
While some researchers question whether scaling laws can hold indefinitely, OpenAI still treats them as a central component of its AI strategy. Scott's remarks align with Microsoft's embrace of these principles, suggesting the technology giant has no intention of stepping back from ever-larger AI models.
AI community debates future model improvements
Scott's stance contrasts sharply with that of AI critics who argue progress has stalled at the level of GPT-4 and similar models. Some have pointed to recent releases, including Google's Gemini 1.5 Pro and Anthropic's Claude Opus, as showing no significant improvement over their predecessors. AI critic Gary Marcus voiced this view in April, questioning the lack of major advances since the release of GPT-4.
Still, Scott remains optimistic about the possibility of new breakthroughs. He acknowledged that AI progress offers relatively few data points, since each frontier model generation amounts to a single sample, but argued this will be less of an issue in the future.
“The next sample is coming, and I can’t tell you when, and I can’t predict exactly how good it’s going to be, but it will almost certainly be better at the things that are brittle right now.”
Kevin Scott
Microsoft's substantial investment in OpenAI is a testament to its confidence in the continued advancement of LLMs, and products such as Microsoft Copilot show the company is committed to putting AI to work. AI critic Ed Zitron, however, has argued that AI may be stuck in part because people expect too much from it.

Brenda Kanana
Brenda is a writer with three years of experience specializing in cryptocurrency, artificial intelligence, and emerging technologies. She graduated from the Technical University of Mombasa with a degree in Sociology and has worked at Zycrypto and Cryptopolitan.














