Currently, ChatGPT does not repeat these horrific false claims about Holmen in its outputs. A more recent update apparently fixed the problem, because "ChatGPT now also searches for information about people when asked who they are," Noyb said. But because OpenAI had previously argued that it cannot correct information – it can only block it – the fabricated story that Holmen murdered his children is probably still part of ChatGPT's internal data. And unless Holmen can get it corrected, that is a violation of the GDPR, Noyb claims.
"While the damage caused may be more limited if false personal data is not shared, the GDPR applies to internal data just as much as to shared data," Noyb said.
OpenAI may not be able to easily remove the data
Holmen is not the only ChatGPT user worried that the chatbot's hallucinations could ruin lives. Months after ChatGPT launched at the end of 2022, an Australian mayor threatened to sue for defamation after the chatbot falsely claimed he had gone to prison. Around the same time, ChatGPT linked a real law professor to a fabricated sexual harassment scandal, The Washington Post reported. A few months later, a radio host sued OpenAI over ChatGPT outputs that described fake embezzlement charges.
In some of those cases, OpenAI filtered the model to prevent it from generating harmful outputs, but probably did not remove the false information from the training data, Noyb suggested. But filtering outputs and spitting out disclaimers are not enough to prevent reputational damage, said Noyb data protection lawyer Kleanthi Sardeli.
"Adding a disclaimer that you do not comply with the law does not make the law go away," Sardeli said. "AI companies also cannot simply 'hide' false information from users while they continue to process that false information internally. AI companies should stop acting as if the GDPR does not apply to them, when it clearly does. If hallucinations are not stopped, people can easily suffer reputational damage."