Google’s rapid deployment of generative artificial intelligence (AI) tools for public use has backfired: the tools are now being used to build fictitious news sites that siphon revenue from advertisers. A study by NewsGuard found that Google’s ad service, Google Ads, supplied roughly 90% of the ads appearing on these fraudulent websites.
The rise of fictitious news sites
NewsGuard’s investigation uncovers an alarming proliferation of fictitious news sites produced with generative AI tools. These websites churn out a high volume of content across many topics, using the repetitive language that characterizes AI-generated text. They sport generic names, many incorporating the word ‘news’, and while some of the content is false, much of it is not inherently misleading: some articles are simply rewrites of original stories from reputable sources.
The role of programmatic advertising
Programmatic advertising, a popular form of targeted advertising, plays a central role in placing ads on these fictitious news sites. In this approach, advertisers rely on automated systems that follow internet users across the web and place ads according to predefined parameters. Because advertisers have limited control over where their ads ultimately appear, this strategy often results in ads unknowingly landing on fraudulent sites. The cost of the service typically ranges from $1 to $5 per thousand ad impressions (CPM, or cost per mille).
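To make the CPM pricing concrete, here is a minimal sketch of the arithmetic. The impression counts are illustrative assumptions, not figures from the study; only the $1–$5 CPM range comes from the article.

```python
def campaign_cost(impressions: int, cpm: float) -> float:
    """Total spend for a given number of ad impressions at a CPM rate.

    CPM is the price paid per 1,000 impressions, so total cost is
    (impressions / 1000) * cpm.
    """
    return impressions / 1000 * cpm

# At the $1-$5 CPM range cited, one million impressions would cost:
low = campaign_cost(1_000_000, 1.0)
high = campaign_cost(1_000_000, 5.0)
print(f"1M impressions: ${low:,.0f} to ${high:,.0f}")
```

At these rates, even a content farm attracting modest automated traffic can generate meaningful revenue, which helps explain why mass-producing such sites is economically attractive.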
The escalation of fraud
The emergence of free-to-use generative AI tools, such as OpenAI’s ChatGPT and Google’s Bard, has facilitated the rapid expansion of content farms and their ability to create vast amounts of content across multiple sites. This escalation has resulted in a surge of new fictitious news sites discovered in the USA, France, Italy, and Germany. These sites produce approximately 1,200 new “articles” daily, all authored by AI bots. The study found 217 such sites in 13 languages over a short period.
Challenges in identifying fictitious sites
Detecting and identifying these fictitious sites is challenging. The study relied on automated textual searches for error messages from AI chatbots as evidence of AI-generated content. This methodology works because many of the sites are created and operated without human supervision: chatbot error messages slip into published articles alongside the generated content, producing convoluted, nonsensical text. It also has limits, since sites that scrub such messages would escape detection.
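The detection approach described above can be sketched as a simple substring scan. The error phrases below are illustrative assumptions about typical chatbot boilerplate, not NewsGuard's actual search terms.

```python
# Hypothetical sketch of the study's detection method: scan page text for
# boilerplate chatbot error phrases that leak into unsupervised AI articles.
ERROR_PHRASES = [
    "as an ai language model",
    "i cannot complete this prompt",
    "i cannot fulfill this request",
]

def looks_ai_generated(page_text: str) -> bool:
    """Return True if any telltale chatbot error phrase appears in the text."""
    lowered = page_text.lower()
    return any(phrase in lowered for phrase in ERROR_PHRASES)

sample = "Sorry, as an AI language model, I cannot complete this prompt."
print(looks_ai_generated(sample))  # True
```

The key limitation noted in the study follows directly from this design: the scan only flags sites careless enough to publish the error messages verbatim.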
Ad brands unwittingly advertise on fictitious sites
NewsGuard chose not to name the advertising brands in the study, but noted that they include prominent blue-chip companies. These brands and their ad agencies likely had no idea their ads were appearing on unreliable, AI-driven sites. That lack of awareness poses reputational risks for the brands and undermines legitimate news outlets, which must compete for the same advertising dollars against these fraudulent sites.
Google responded to the findings by emphasizing its focus on content quality rather than how content is created, and said it blocks or removes ads when violations are detected. The study’s results, however, indicate that Google’s ad service placed the majority of ads found on these fictitious news sites, suggesting the need for closer scrutiny and stronger prevention measures.
Google’s generative AI tools, intended for broad public benefit, have been turned against the company, fueling fictitious news sites that deceive advertisers and pollute the web. The NewsGuard study reveals the alarming prevalence of AI-produced fraudulent content and the unwitting advertisers promoting their brands on unreliable platforms. To safeguard the online advertising ecosystem, digital advertising companies like Google must strengthen their efforts to identify and shut down these deceptive practices. Only then can they protect their advertisers and maintain trust in the credibility of the web.