AI Hallucinations Bring More Legal Trouble for OpenAI

In this post:

  • Digital rights advocacy group Noyb has filed a complaint against OpenAI in Austria over ChatGPT's hallucinations.
  • ChatGPT gave incorrect information about a public figure's date of birth, and OpenAI refused to delete or correct it.
  • Noyb's lawyer says companies are not yet able to build chatbots that meet the requirements of EU law.

Privacy organization Noyb has filed a complaint against OpenAI with the Austrian Data Protection Authority (DPA), arguing that its product ChatGPT breaches several EU data protection laws. The organization says ChatGPT shares incorrect information about people, while the EU's General Data Protection Regulation (GDPR) requires that personal data be accurate and that individuals be given full access to the data held about them.

OpenAI faces GDPR charges

Noyb was founded by the well-known lawyer and activist Max Schrems. It claims that ChatGPT gave a false date of birth for a famous public figure, and that when the individual asked OpenAI for access to and deletion of his data, the request was denied.

Noyb says that under the EU's GDPR, any information about an individual must be accurate, and the individual must have access to it and to information about its source. According to Noyb, however, OpenAI says it is unable to correct information in its ChatGPT model, cannot tell where the information came from, and does not even know what data ChatGPT stores about individuals.

Noyb claims that OpenAI is aware of the problem but appears not to care, as its position on the issue is that,

“Factual accuracy in large language models remains an area of active research.”

Noyb noted that wrong information may be tolerable when a student uses ChatGPT for homework, but said it is clearly unacceptable when it concerns individual people, since EU law requires personal data to be accurate.

Hallucinations make chatbots non-compliant with EU regulations

Noyb mentioned that AI models are prone to hallucinations, producing information that is actually false. It questioned OpenAI's technical procedure for generating information, pointing to the company's own explanation that ChatGPT generates

“responses to user requests by predicting the next most likely words that might appear in response to each prompt.”

Source: Statista.

Noyb argues that this means that even though the company has extensive datasets available for training its model, it still cannot guarantee that the answers provided to users are factually correct.
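The next-word-prediction idea quoted above can be illustrated with a toy sketch. This is not OpenAI's actual system, just a minimal bigram model over a made-up corpus: it always emits the statistically most likely next word, so it outputs whichever date appears most often in its training text, with no notion of whether that date is true.

```python
# Toy illustration of next-word prediction (NOT OpenAI's actual model):
# a bigram "language model" that greedily picks the most likely next word.
from collections import Counter, defaultdict

# Hypothetical training corpus: the wrong date (1962) happens to be
# more frequent than the correct one (1950).
corpus = (
    "the ceo was born in 1950 . "
    "the ceo was born in 1962 . "
    "the ceo was born in 1962 ."
).split()

# Count which word follows which.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def generate(start, steps):
    """Greedily predict the most likely next word at each step."""
    words = [start]
    for _ in range(steps):
        candidates = follows[words[-1]].most_common(1)
        if not candidates:
            break
        words.append(candidates[0][0])
    return " ".join(words)

print(generate("the", 6))  # -> "the ceo was born in 1962 ."
```

The model fluently "states" the more frequent date regardless of its accuracy, which is the crux of Noyb's complaint: likelihood over training text is not the same as factual correctness.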

Noyb’s data protection lawyer, Maartje de Graaf, said,

“Making up false information is quite problematic in itself. But when it comes to false information about individuals, there can be serious consequences.”

Source: Noyb.

She also said that any technology has to follow the law and cannot simply play around with it: in her view, if a tool cannot produce correct results about individuals, it cannot be used for this purpose. She added that companies are not yet technically capable of creating chatbots that comply with EU law on this subject.

Generative AI tools are under strict scrutiny from European privacy regulators; back in 2023, the Italian DPA temporarily restricted ChatGPT over data protection concerns. It is not yet clear what the outcome of this complaint will be, but according to Noyb, OpenAI does not even pretend that it will comply with EU law.

