Google analyst warns AI answers ‘not perfect, can’t replace your brain’

In this post:

  • Google analyst Gary Illyes warned that LLMs still have gaps in accuracy.
  • The models need a human eye to verify the content they produce.
  • He said people shouldn’t trust AI responses without checking authoritative sources.

Google analyst Gary Illyes warned that large language models – the tech behind generative AI chatbots like ChatGPT – still have gaps in accuracy and need a human eye to verify the content they produce. The comments come just days after OpenAI launched SearchGPT, a new AI-powered search engine that will compete directly with Google. 

Illyes shared the comments on LinkedIn in response to a question he got in his inbox, but did not say what the question was. He said people shouldn’t trust AI responses without checking other authoritative sources. OpenAI aims for its search tool to upend Google’s dominance in the search engine market.

AI responses are not ‘necessarily factually correct’

Illyes, who has been with Google for over a decade, said that while AI answers may be close to accurate, they are not “necessarily factually correct”. That is because large language models (LLMs) are not immune to the misinformation circulating on the internet, which can seep into their training data, he explained.

“Based on their training data LLMs find the most suitable words, phrases, and sentences that align with a prompt’s context and meaning,” Illyes wrote. “This allows them to generate relevant and coherent responses. But not necessarily factually correct ones.”

The Google analyst said users will still need to validate AI answers based on what “you know about the topic you asked the LLM or on additional reading on resources that are authoritative for your query.”

One way developers have tried to improve the reliability of AI-generated content is through a practice called “grounding,” which anchors machine-generated responses in verifiable source material, often with human review, to guard against error. According to Illyes, grounding may still not be enough.

“Grounding can help create more factually correct responses, but it’s not perfect; it doesn’t replace your brain,” he said. “The internet is full of intended and unintended misinformation, and you wouldn’t believe everything you read online, so why would you believe LLM responses?”

Elon Musk accuses Google of gatekeeping public info

Trust has long been an issue with search engines like Google and with other artificial intelligence platforms, given the control they exert over the information they deliver to users.

One such incident involves the recent assassination attempt on former U.S. President Donald Trump. Elon Musk suggested that Google blocked the shooting incident from appearing in its search results, sparking a major debate on social media about the reach of Big Tech.

In the flurry of responses, a spoof account purporting to belong to Google vice president Ana Mostarac added to the debate, sharing a fake apology from the company for allegedly blocking content on Trump.

“…People’s information needs continue to grow, so we’ll keep evolving and improving Search. However, it seems we need to recalibrate what we mean by accurate. What is accurate is subjective, and the pursuit of accuracy can get in the way of getting things done,” the fake account posted on X.

“You can be assured that our team is working hard to ensure that we don’t let our reverence for accuracy be a distraction that gets in the way of our vision for the future,” it added.

Community Notes on X quickly flagged the post, noting that the account was impersonating the Google VP. The episode shows how easily information can be distorted online, and why AI models may not be able to discern what is accurate without human review.
