
AI search engines send 96% less referral traffic to news sites and blogs than traditional Google search

In this post:

  • AI search engines from companies like OpenAI and Perplexity are sending 96% less referral traffic to publishers than traditional Google search.
  • Publishers are taking legal action against AI companies for copyright infringement, with notable lawsuits from Chegg and others against Google and Perplexity.
  • Experts warn that the rise of AI scraping could lead to an ‘AI slurry’ that threatens the quality of internet content and the viability of publishers.

Companies like OpenAI and Perplexity have promised that their AI search engines would provide publishers with new sources of income by directing traffic to their sites.

However, according to a report shared with Forbes by content licensing platform TollBit, AI search engines actually send 96% less referral traffic to news sites and blogs than traditional Google search.

In the meantime, data scraping from websites by AI developers has continued to increase, adding to the frustration many publishers feel toward these tools.

An interface of OpenAI’s ChatGPT search. Source: OpenAI (X/Twitter)

How AI-powered search engines are stealing the show 

To understand the conflict developing between publishers and AI search engines, we need to go back to its origins.

Search engine optimization became a huge deal when Yahoo! evolved from a pure set of directory listings in 1994 to offering search in 1995. Search was the answer to internet discovery, and after Google arrived in 1997 as the first truly good search engine, it quickly came to dominate the web and became essential for web users.

Google became the web’s best search engine because of its PageRank scoring mechanism, which ranked search results based on how many other websites linked to each hit.
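
For context, the sketch below shows the basic idea behind PageRank in simplified form: a page's score grows with the scores of the pages that link to it. The function name, damping factor, and toy link graph are illustrative assumptions, not Google's actual implementation.

```python
# A minimal sketch of the PageRank idea (illustrative only, not Google's
# production algorithm). Pages that receive more inbound links from
# well-linked pages accumulate a higher score.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping each page to the list of pages it links to."""
    pages = list(links)
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}   # start with equal scores

    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page, outlinks in links.items():
            if not outlinks:
                # Dangling page: spread its rank evenly across all pages
                for p in pages:
                    new_rank[p] += damping * rank[page] / n
            else:
                share = damping * rank[page] / len(outlinks)
                for target in outlinks:
                    new_rank[target] += share
        rank = new_rank
    return rank

# Tiny example: "news" is linked to by both other pages, so it ranks highest.
example = {
    "news":  ["blog"],
    "blog":  ["news"],
    "forum": ["news"],
}
print(pagerank(example))
```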

Today, with the spread of AI-powered search engines, users are opting for simple AI answer agents or the more elaborate "Deep Research"-style research tools available from several AI labs to conduct their online queries.

This shifting relationship between AI search engines and content publishers challenges the 'social contract' that has kept traditional search engines like Google relevant to publishers.

Under that contract, publishers provided content in exchange for referral traffic from search engines, a symbiotic relationship that has sustained the online media ecosystem for decades.


However, now that AI-generated summaries remove the need for users seeking direct answers to visit the original sources, the dynamic has shifted significantly.

The new development not only threatens the revenue models of content creators but also forces us to question the sustainability of quality journalism. 

Nathan Schultz, CEO of edtech company Chegg, thinks it is time to "say no," arguing that breaking the longstanding contract is not right.

It is clear that AI is here to stay, and in time more and more people will turn to AI agents when they have questions. One expert believes this is because AI goes the extra mile to deliver understanding rather than simply helping users discover what they are looking for.

While this is a great feature for users, it has a devastating effect on sites that depend on referral traffic to survive. Take Chegg, for example: its traffic plummeted 49% year-over-year in January, a far steeper decline than the 8% drop it recorded in the second quarter of last year, when Google released AI summaries.

The traffic decline has hit Chegg so hard that it is considering going private or getting acquired, Schultz said on an earnings call.

How news publishers have been dealing with the new development 

According to TollBit's report, AI search engines send less referral traffic than traditional Google searches, and web scraping by AI companies has risen sharply, with some sites scraped an average of 2 million times in the last quarter of 2024. Unfortunately, these scrapes rarely translate into traffic for the affected sites.


Publishers are not happy about this and have reacted mainly by taking legal action against AI companies for intellectual property infringement. Forbes sent a cease-and-desist letter to Perplexity in June, accusing it of copyright infringement, and in October the New York Post and Dow Jones sued the company for alleged copyright infringement and for attributing made-up facts to media companies.

Perplexity AI shares stats about its Deep Research AI search tool’s performance. Source: Perplexity AI (X/Twitter)

At the time, Perplexity responded by saying the lawsuit reflects a posture that is “fundamentally shortsighted, unnecessary, and self-defeating.”

Earlier this month, a group of publishers including Condé Nast, Vox and The Atlantic also filed a lawsuit against enterprise AI company Cohere, accusing it of scraping 4,000 copyrighted works from the internet and using them to train its suite of large language models.

The issue is further complicated by the fact that AI companies often fail to properly identify their web crawlers, which makes it difficult for publishers to manage access to their content.
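
Where a crawler does declare itself, a publisher can at least filter requests on the User-Agent string. The sketch below is a hypothetical illustration: the helper function and the blocking policy are assumptions, while the bot tokens are ones the respective AI crawlers publicly document. It also shows why undeclared crawlers are hard to manage, since a spoofed browser string passes the check.

```python
# A minimal sketch of screening requests by declared User-Agent.
# A User-Agent check alone is easy to evade, which is exactly the
# problem publishers describe when crawlers don't identify themselves.

AI_CRAWLER_TOKENS = (
    "GPTBot",         # OpenAI's crawler
    "PerplexityBot",  # Perplexity's crawler
    "ClaudeBot",      # Anthropic's crawler
    "CCBot",          # Common Crawl, widely used as training data
)

def is_declared_ai_crawler(user_agent: str) -> bool:
    """Return True if the request declares one of the known AI crawler tokens."""
    ua = (user_agent or "").lower()
    return any(token.lower() in ua for token in AI_CRAWLER_TOKENS)

# A crawler that identifies itself can be blocked or metered...
print(is_declared_ai_crawler("Mozilla/5.0 (compatible; GPTBot/1.0)"))   # True
# ...but one that spoofs a browser User-Agent slips through this check.
print(is_declared_ai_crawler("Mozilla/5.0 (Windows NT 10.0; Win64)"))   # False
```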

To deal with these challenges, some publishers have opted for content licensing deals with AI companies to ensure they are compensated for the use of their data. Other firms, such as TollBit, have developed models that charge AI companies for scraping content.

As things continue to evolve, legal frameworks around data protection and intellectual property will become critical battlegrounds where publishers' rights may be defended and expanded.

If things are allowed to continue unchecked, analysts say we could enter an era dominated by "AI slurry": a situation in which high-caliber content providers are forced out of business and the quality of available information is significantly diluted.
