OpenAI’s chatbot, ChatGPT, is under investigation in Europe over privacy complaints. It has shown a tendency to hallucinate false information about real people. Dangerous, no? This one might be hard for regulators to ignore.
A man in Norway is getting help from the European Center for Digital Rights (Noyb) after discovering that ChatGPT was returning false information about him: the chatbot claimed he had been found guilty of killing two of his children and of attempting to kill a third. Falsehoods like these could follow him for years.
The alarming part is that OpenAI gives people no way to correct false information the AI invents about them. Instead, it offers to block answers to questions of this kind, which raises the question of whether the false data is actually discarded or could simply be generated again later.
The European Union’s General Data Protection Regulation (GDPR), however, gives people a right of access to their personal data and a right to have inaccurate data corrected. The regulation also obliges those who control personal data to ensure it is accurate, and that is what turns this incident into a legal case.
Joakim Söderberg, a data protection lawyer at Noyb, said, “The GDPR is clear. Personal data has to be accurate. […] If it’s not, users have the right to have it changed to reflect the truth.”
He added, “Showing ChatGPT users a tiny disclaimer that the chatbot can make mistakes clearly isn’t enough. You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true.”
Details of the ongoing investigations
When asked who Arve Hjalmar Holmen is, the chatbot told a grim story: it claimed he had been found guilty of child murder and sentenced to 21 years in prison for killing two of his own children.
Why is this even more dangerous? Although it is untrue that Hjalmar Holmen is a child murderer, Noyb points out that ChatGPT’s answer included real facts: the man does have three children, and the chatbot correctly identified their genders and named his hometown. Blending accurate details with such a horrifying fabrication makes the output all the more disturbing.
A Noyb spokesperson said the group could not work out why the chatbot invented such a detailed yet false background for this person. To rule out a mix-up with someone else of the same name, they searched newspaper archives, but found nothing that could explain why the AI fabricated the child murders.
At this point, the best explanation is that large language models, like the one that powers ChatGPT, basically do next-word prediction on a huge scale.
If the datasets used to train the tool contained many stories of filicide, those patterns could have influenced the words it chose when answering a question about a named man. Whatever the reason, outputs like these are clearly unacceptable.
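To make that mechanism concrete, here is a minimal sketch of next-word prediction using a toy bigram model. It is purely illustrative: the tiny corpus and the predict_next helper are invented for this example and bear no relation to OpenAI’s actual systems, which are trained on vastly more data with far more sophisticated architectures.

```python
# A toy illustration of next-token prediction, the core mechanism
# behind large language models. Invented for this article; it does
# not represent OpenAI's actual model.
from collections import Counter, defaultdict

# Hypothetical training text, standing in for a real corpus.
corpus = "the man was found guilty . the man has three children .".split()

# Count which token follows each token in the training text.
transitions = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    transitions[current][nxt] += 1

def predict_next(token: str) -> str:
    """Return the most frequent continuation seen in training."""
    followers = transitions.get(token)
    return followers.most_common(1)[0][0] if followers else "<unk>"

# Generate a few tokens: the model simply repeats statistical
# patterns from its training data, with no notion of factual truth.
token = "the"
output = [token]
for _ in range(5):
    token = predict_next(token)
    output.append(token)
print(" ".join(output))  # e.g. "the man was found guilty ."
```

The toy makes the article’s point: such a model strings together whatever continuations are statistically common in its training text, so if crime stories dominate the relevant patterns, crime-flavored claims can come out attached to a real name.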
Meanwhile, Noyb says the chatbot stopped telling these dangerous lies about Hjalmar Holmen after the AI model behind it was updated; the tool now searches the internet for information about a person when asked who they are.
Even though ChatGPT appears to have stopped spreading the harmful lies about Hjalmar Holmen, both he and Noyb remain worried that the AI model may have retained the false and damaging information about him internally.
Noyb has filed a complaint against OpenAI with the Norwegian data protection authority and hopes the watchdog will decide it has the power to investigate. The complaint is aimed at OpenAI’s U.S. entity, arguing that the company’s Ireland office is not solely responsible for product decisions that affect Europeans.
OpenAI’s fate
This is not a first for OpenAI. People have previously complained about ChatGPT generating incorrect personal data, such as a wrong birth date or inaccurate biographical details.
Italy’s data protection watchdog intervened early under the GDPR, temporarily blocking ChatGPT in the country in the spring of 2023. That prompted OpenAI to make changes, including to the information it discloses to users. The watchdog later fined OpenAI €15 million for processing people’s data without a valid legal basis.
Poland’s data protection watchdog, meanwhile, has been examining a privacy complaint against ChatGPT since September 2023 and has yet to reach a decision.
Since then, privacy officials in Europe have grown warier of GenAI as they work out how best to apply the GDPR to these popular AI tools.
In response to an earlier Noyb complaint about ChatGPT, for example, Ireland’s Data Protection Commission (DPC), which enforces the GDPR in Ireland, said two years ago that regulators should not rush to ban GenAI tools but should instead take time to figure out how the law applies.
Noyb’s new ChatGPT complaint looks designed to wake privacy officials up to the dangers of hallucinating AIs. Confirmed breaches of the GDPR can lead to fines of up to 4% of a company’s global annual turnover.