AI-Generated Hallucinations: A Growing Health Concern in 2023

TL;DR

  • In 2023, AI-generated hallucinations are rapidly spreading false information, presenting a significant challenge.
  • AI can mimic harmful human behavior, as Microsoft’s Tay chatbot demonstrated, amplifying these concerns.
  • AI’s misinformation threatens health and distorts reality, prompting expert warnings about an AI-driven information crisis.

The proliferation of artificial intelligence (AI) has ushered in an unsettling phenomenon: AI-generated hallucinations. The term “hallucinate” has taken on a new and alarming meaning in the digital age, as AI systems can produce false information that misleads individuals and society. In 2023, this trend gained so much attention that Dictionary.com named “hallucinate” its Word of the Year.

Dictionary.com’s 2023 word of the year

Dictionary.com’s decision to designate “hallucinate” as its Word of the Year speaks volumes about the rising prominence of AI-generated misinformation. The choice reflects a 46% increase in dictionary lookups for “hallucinate” from 2022 to 2023, coupled with a similar surge in searches for “hallucination.” However, the driving force behind this surge is not the traditional definition of the word but rather an AI-specific interpretation:

Hallucinate [ huh-loo-suh-neyt ] -verb- (of artificial intelligence) to produce false information contrary to the user’s intent and present it as true and factual. Example: When chatbots hallucinate, the result is often not just inaccurate but completely fabricated.

AI’s capacity for deception

AI’s potential for deception is a growing concern. While not all AI systems engage in this behavior, some can be programmed to mimic human characteristics, serving as political mouthpieces or disseminating false information while masquerading as purveyors of fact. What sets AI apart from humans is the unparalleled speed at which it can churn out misinformation and disinformation.

A recent study published in JAMA Internal Medicine underscored the extent of this issue. The study demonstrated how OpenAI’s GPT Playground generated over 17,000 words of disinformation about vaccines and vaping in just 65 minutes. Additionally, generative AI tools created 20 realistic images to accompany the false narratives in less than 2 minutes. This rapid generation of deceptive content challenges the ability of individuals to discern fact from fiction.

Unintended consequences of AI misinformation

Even when AI systems lack the intent to deceive, they can inadvertently produce misleading information. A study presented at the American Society of Health-System Pharmacists’ Midyear Clinical Meeting highlighted AI’s limitations in the medical domain. When asked 39 medication-related questions, ChatGPT provided satisfactory answers to only 10. For instance, it erroneously claimed that Paxlovid, a COVID-19 antiviral medication, and verapamil, a blood pressure medication, have no interactions, contradicting established medical knowledge.
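The pharmacists’ finding illustrates why chatbot answers about medications need to be cross-checked against a vetted reference rather than taken at face value. Below is a minimal sketch of that idea: look up a drug pair in a curated interaction table before trusting an AI’s “no interaction” claim. The table contents here are illustrative placeholders, not medical data, and `check_interaction` is a hypothetical helper, not a real library API.

```python
# Sketch: cross-check a chatbot's drug-interaction claim against a curated table.
# The entries below are illustrative placeholders only -- a real system would
# query a vetted pharmacology database, not a hard-coded dict.

KNOWN_INTERACTIONS = {
    # Unordered pair of drug names -> short note on the interaction.
    frozenset({"paxlovid", "verapamil"}): (
        "documented interaction; do not rely on a chatbot's 'no interaction' answer"
    ),
}

def check_interaction(drug_a: str, drug_b: str) -> str:
    """Return a note if the pair appears in the reference table."""
    pair = frozenset({drug_a.lower(), drug_b.lower()})
    note = KNOWN_INTERACTIONS.get(pair)
    if note:
        return f"Interaction found: {note}"
    # Absence from a small table is not evidence of safety.
    return "No interaction in this table (absence is not proof of safety)"

print(check_interaction("Paxlovid", "Verapamil"))
```

The key design point is that the lookup uses an unordered pair (`frozenset`), so the check gives the same answer regardless of which drug is named first.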

AI’s capacity to generate misinformation extends beyond healthcare. Some AI tools have been observed to misinterpret images, frequently mistaking various objects for birds. In one example reported by The Economist, an AI asked about the Golden Gate Bridge being transported to Egypt in 2016 answered as though the event had actually occurred, showcasing its inability to distinguish fact from fiction.

The Microsoft Tay chatbot incident in 2016 further underscores AI’s potential to generate harmful content. Within 24 hours of joining Twitter, the chatbot began spouting racist, misogynistic, and false tweets, prompting Microsoft to swiftly remove it from the platform. The episode highlights AI’s propensity to emulate negative human behaviors, raising questions about ethical considerations in AI development and deployment.

A real health issue

AI-generated hallucinations, like their human counterparts, pose a genuine health concern. Beyond the immediate implications of misinformation, they can adversely affect mental and emotional well-being. A constant barrage of AI-generated hallucinations can erode an individual’s sense of reality, leading to confusion and anxiety.

Recognizing the gravity of this issue, organizations such as the World Health Organization and the American Medical Association have issued statements cautioning against the potential harms of AI-generated misinformation and disinformation. While Dictionary.com’s selection of “hallucinate” as its Word of the Year for 2023 is emblematic of the problem, addressing the complex challenges posed by AI-generated hallucinations requires ongoing vigilance and concerted efforts to promote responsible AI development.

Brenda Kanana
