
Balaji Srinivasan calls AI the most economically useful invention of our time

In this post:

  • Balaji Srinivasan says there are many strong AIs, not one dominant model.
  • AI shifts costs to prompting and verifying, not full automation.
  • Crypto limits what AI can fake, especially with onchain data.

Balaji says AI is polytheistic, not monotheistic, meaning there isn’t one super-intelligent system ruling over all. Instead, there are many strong AIs, each backed by different players.

In his words: “We empirically observe polytheistic AI… rather than a single all-powerful model.” That wipes out the fantasy of one AGI turning the world into paperclips. What we have is a balance of power between many human/AI combinations, not one dominant force.

He says AI right now only works “middle-to-middle.” It doesn’t handle full jobs from start to finish. You still need people at both ends: one to prompt the AI and another to check its output.

So all the real costs and effort have shifted to the edges: prompting and verifying. That’s where companies are now spending their money, even though AI speeds up the center of the process.

AI makes you smarter, but only if you are already smart

Balaji doesn’t call it artificial intelligence; he calls it amplified intelligence, because the AI isn’t acting on its own: it’s not fully agentic, it doesn’t set long-term goals, and it can’t verify its own output. “You have to spend a lot of effort on prompting, verifying, and system integrating,” he said. So how useful AI is depends on how smart you are. If you give it bad instructions, it gives you bad results.

He also says AI doesn’t replace you, it just helps you do more jobs. With it, you can fake your way into being a passable UI designer or game animator. But don’t expect expert quality. AI makes you good enough to be average, not excellent. For real quality, you still need specialists.


There’s another job it does take, and that’s the job of the last version of itself. Midjourney pushed Stable Diffusion out of the workflow. GPT-4 took GPT-3’s spot. As Balaji puts it, “AI doesn’t take your job, it takes the job of the previous AI.” Once companies create a space for AI in a workflow, like image creation or code generation, that space stays filled. It just gets handed off to the newer, better model.

He also says AI is better at visuals than text. It’s easier for humans to judge a picture than to verify a wall of code or paragraphs of text. “User interfaces and images can easily be verified by the human eye,” Balaji says. With text, it’s slower and more costly for people to check the accuracy.

Crypto limits what AI can and can’t do

Balaji draws a line between how AI works and how crypto works. AI is probabilistic; it guesses based on patterns. But crypto is deterministic; it runs on hard, provable math. So crypto becomes a boundary that AI can’t easily cross.

AI might break captchas, but it can’t fake a blockchain balance. “AI makes everything fake, but crypto makes it real again,” he says. AI might solve simple equations, but cryptographic equations still block it.
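That deterministic boundary can be sketched with a minimal hash-commitment check in Python. This is an illustration, not how any particular chain works; the `balance:100` payload is a made-up stand-in for onchain data.

```python
import hashlib
import hmac

# Commit to some data by publishing its SHA-256 hash.
# Verification is deterministic: the claim either reproduces
# the committed hash exactly, or it fails. There is no
# "plausible-looking" answer an AI could pattern-match into passing.
committed = hashlib.sha256(b"balance:100").hexdigest()

def verify(claim: bytes, commitment: str) -> bool:
    """Recompute the hash and compare in constant time."""
    digest = hashlib.sha256(claim).hexdigest()
    return hmac.compare_digest(digest, commitment)

print(verify(b"balance:100", committed))   # the genuine data verifies
print(verify(b"balance:1000", committed))  # a faked balance fails
```

A generative model can produce text that reads like a valid balance, but it cannot produce bytes that hash to a commitment it doesn’t already know the preimage of; that is the sense in which “crypto makes it real again.”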

There’s also already a version of killer AI out there. It’s drones. “Every country is pursuing it,” Balaji says. It’s not image generators or chatbots that pose the threat, it’s autonomous weapons. That’s the area where AI’s real-world impact is already lethal.

He argues that AI is decentralizing, not centralizing. Right now, there are tons of AI companies, not just one or two giants. Small teams with good tools can do a lot. And open-source models are improving fast. So even without massive budgets, small groups can build strong AI systems. That breaks up power instead of concentrating it.


Balaji also rejects the idea that more AI is always better. He says the ideal amount is neither zero nor 100%. “0% AI is slow, but 100% AI is slop.” Real value lives in between. Too little AI means you’re behind. Too much, and quality falls apart. He compares it to the Laffer Curve, a concept in economics that says there’s a sweet spot between extremes.

In his final argument, he lays out why today’s systems are constrained AIs, not godlike machines. He breaks that into four kinds of limits:

  • Economic: Every API call costs money. Using AI at scale isn’t free.
  • Mathematical: AI can’t solve chaotic or cryptographic problems.
  • Practical: You still need humans to prompt and verify results. AI can’t complete the full task alone.
  • Physical: AI doesn’t gather real-world data on its own. It can’t sense its environment or interpret it like people do.
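The economic limit is the easiest to make concrete with back-of-the-envelope arithmetic. The prices and volumes below are invented purely for illustration, not any vendor’s real pricing.

```python
# Hypothetical figures (assumptions, not real pricing):
price_per_1k_tokens = 0.01   # USD per 1,000 tokens, assumed
tokens_per_request = 2_000   # prompt + completion, assumed
requests_per_day = 100_000   # a modest production workload, assumed

# Per-call cost scales linearly with tokens; at scale it adds up fast.
daily_cost = price_per_1k_tokens * (tokens_per_request / 1000) * requests_per_day
print(f"${daily_cost:,.0f}/day")  # → $2,000/day
```

Even at fractions of a cent per call, a high-volume workload turns “cheap” API access into a real line item, which is Balaji’s point: using AI at scale isn’t free.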

He ends by saying these limits might be removed later. Future researchers could merge System 1 thinking (fast and intuitive, like AI) with System 2 thinking, which is more logical and careful, like traditional computing. But right now, that’s just theory; it’s still an open problem. There is no all-knowing AI. There are just tools (expensive, limited, competitive tools) that do what they’re told and need constant checking.


Disclaimer. The information provided is not trading advice. Cryptopolitan.com holds no liability for any investments made based on the information provided on this page. We strongly recommend independent research and/or consultation with a qualified professional before making any investment decisions.
