Google is scaling back the presentation of AI Overviews in some search results. The decision came after the search giant made several widely publicized mistakes that sparked a public backlash online.
On Thursday, Liz Reid, the Head of Google Search, admitted that the company was limiting its AI-generated summaries, called "AI Overviews." The feature drew criticism after it started telling people to eat rocks and glue cheese to their pizza.
Also read: Google, OpenAI, and 13 Others Pledge Not to Deploy Risky AI Models
Google began showing AI Overviews to users in the United States two weeks ago, after its annual I/O event, where it rolled out several AI-based offerings. From Thursday onwards, users noticed that fewer queries were returning an AI-generated response.
Identifying Fake AI Overviews Requires Careful Analysis
Reid wrote in a blog post that people on social media shared screenshots of AI Overviews suggesting bizarre solutions. She said the company's AI did make some mistakes, but many of the screenshots were fake and were never generated by Google's system.
She admitted that some odd or inaccurate AI Overviews certainly did show up, highlighting areas where the feature needs to improve, though they appeared mostly on uncommon queries. Noting the fake ones, she said,
"Separately, there have been a large number of faked screenshots shared widely. Some of these faked results have been obvious and silly."
Reid said Google did not show any AI Overviews for topics like smoking during pregnancy or leaving dogs in cars. She encouraged users who saw those screenshots on social media to run the searches themselves and check.
Google's AI Can Now Identify Satirical Content
Google identified satirical content and nonsensical queries as areas that needed to be addressed because they were producing inaccurate AI Overviews. The company said that before screenshots of the query "How many rocks should I eat?" went viral, practically no one had asked the question. Reid pointed to Google Trends data for confirmation.
Also read: Opera Integrates Google's Gemini Models to Enhance Aria Browser AI
Another factor is a "data void," where very little content is available on a specific topic, which can also lead to inaccurate AI Overviews. Regarding the rock-eating query, Reid said the satirical nature of the available content triggered the AI Overview. That content had also been republished by a geological software website, which the AI Overview linked to. She clarified,
"So when someone put that question into Search, an AI Overview appeared that faithfully linked to one of the only websites that tackled the question."
Reid did not mention the other AI Overviews making the rounds on social media, many of which major publishers also reported on. For example, The Washington Post reported that Google's AI told people Barack Obama was Muslim. The publication also reported another instance in which an AI Overview told a user that people should drink urine to help pass kidney stones.
Google Has an Obsession With Public Forums
Google's head of search noted that the company tested the feature extensively before launching it to the public. She also admitted that real-world usage is difficult to simulate in the testing phase, though the company did robust red-teaming and sampled typical user queries.
Also read: Google's AI Overview Feature Faces Backlash Over Inaccurate Results
Some users and internet content analysts have noted that Google recently began showing many results from public forums like Reddit. Some question the company's deal with Reddit, but a Google spokesperson told the BBC that the agreement contains no terms giving Reddit more visibility in search results.
Lily Ray, the vice president of SEO strategy and research at the marketing agency Amsive, says that many queries now show results from Reddit. She also mocked Google in a post on X (formerly Twitter), saying all the world's knowledge (Reddit content) is being combined with AI.
Ladies and gentlemen, all the world's knowledge, combined with the power of AI and one Reddit troll. pic.twitter.com/jhzT2pGXjh
— Lily Ray (@lilyraynyc) May 21, 2024
Reid said that in some examples, AI Overviews featured sarcastic content from discussion forums. She said forums are a good source of first-hand information, but in some cases they can lead to unhelpful answers, such as gluing cheese to pizza.
Google's AI can also misinterpret language, which leads to wrong information in AI Overviews, but such answers are very rare, according to the company. Reid noted that the team resolved the issues quickly by improving the systems or removing responses that didn't comply with Google's policies.
Google Search's VP also said strong guardrails exist for topics like health and news. The company has launched new triggering refinements to strengthen quality protections for health queries. Reid said the detection system has also been updated so that nonsensical queries no longer trigger AI Overviews. She also confirmed there will be fewer AI Overviews for satirical queries, saying, "We updated our systems to limit the use of user-generated content in responses that could offer misleading advice."
Cryptopolitan reporting by Aamir Sheikh