In this week’s featured article, we look at how a false murder claim by ChatGPT has fuelled fresh concerns over AI accuracy and hallucinations.
A Father Falsely Accused by AI
In quite a shocking story, a Norwegian man has filed a formal complaint after ChatGPT falsely claimed he murdered two of his sons! The case, now lodged with Norway’s data protection authority (Datatilsynet), is raising serious questions about AI hallucinations and how companies like OpenAI are handling personal data.
Who Is Arve Hjalmar Holmen?
Arve Hjalmar Holmen, the man at the centre of the complaint, is what you might call a private citizen. He’s not a public figure, has no criminal record, and lives with his family in Trondheim (Norway), yet when he typed his own name into ChatGPT last year, the chatbot responded with a chilling story.
Holmen alleges that ChatGPT claimed he had been convicted of killing two of his sons and attempting to murder a third, and had been sentenced to 21 years in prison! It even claimed the tragic event had “shocked the local community and the nation”. The only issue (and quite a large one) is that none of it ever happened.
Scared
Holmen is reported to have said that what scared him was the thought that other people could read this bizarre output and believe it to be true.
Although the key claim was inaccurate, it seems that the AI’s fictional story didn’t come entirely out of thin air. For example, it correctly mentioned that Holmen lives in Trondheim and has three sons. The ages in the fabricated account also eerily mirrored the real age gap between his children.
It’s been reported that Holmen says he tried to contact OpenAI but received only a generic response. Frustrated, it seems, he turned to noyb, a European digital rights group, which has now filed an official GDPR complaint on his behalf.
The Legal Challenge
The complaint accuses OpenAI of breaching Article 5(1)(d) of the GDPR, which requires organisations to ensure that personal data is accurate and kept up to date. Noyb argues that the company should delete the output and “fine-tune” its model to prevent further harm to Holmen’s reputation.
They are also calling for the Norwegian data authority to impose a fine. In the words of Joakim Söderberg, noyb’s legal officer: “You can’t just spread false information and in the end add a small disclaimer saying that everything you said may just not be true”.
What Are AI Hallucinations?
The Holmen case is the latest example of what the industry calls a “hallucination”, i.e. when an AI system makes something up and presents it as fact.
These errors are surprisingly common in large language models like ChatGPT and Google’s Gemini. For example, just last year, Google’s AI-generated search summaries (powered by Gemini) suggested people glue cheese to pizza and eat rocks for health reasons!
These so-called ‘hallucinations’ are a consequence of how these AI models work: rather than “knowing” facts, they predict the most likely next word or phrase based on patterns in vast amounts of text. This can produce convincingly written (but totally inaccurate) results.
As Professor Simone Stumpf of the University of Glasgow explains: “Even if you are involved in the development of these systems… quite often, you do not know how they actually work, why they’re coming up with this particular information.”
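To see how that can happen, here is a deliberately tiny toy sketch in Python. It is nothing like OpenAI’s actual models (which are neural networks trained on billions of parameters); the corpus, the word-counting approach, and the generate function are all made up purely for illustration. The point is that the program stores no facts at all, only which words tend to follow which, and then stitches new text together by sampling likely next words.

```python
import random
from collections import Counter, defaultdict

# Tiny "training corpus": the model stores no facts about anyone,
# only which words tend to follow which.
corpus = [
    "the man lives in trondheim and has three sons",
    "a man from trondheim was convicted of a serious crime",
    "the crime shocked the local community and the nation",
]

# Count, for each word, the words seen immediately after it.
follows: defaultdict[str, Counter] = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for current_word, next_word in zip(words, words[1:]):
        follows[current_word][next_word] += 1

def generate(start: str, max_words: int = 15) -> str:
    """Repeatedly sample a likely next word, loosely mimicking how a
    language model extends a prompt."""
    output = [start]
    while len(output) < max_words and output[-1] in follows:
        candidates = follows[output[-1]]
        next_word = random.choices(
            list(candidates), weights=list(candidates.values())
        )[0]
        output.append(next_word)
    return " ".join(output)

# Every adjacent word pair in the output appeared somewhere in the corpus,
# so the text reads fluently -- but the sentence as a whole may assert
# something that never happened (e.g. splicing "lives in trondheim" onto
# "was convicted of a serious crime").
print(generate("the"))
```

Even this crude word-counter can produce a sentence that sounds plausible while describing an event that exists nowhere in its training text, which is, in miniature, the same failure mode the industry calls a hallucination.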
OpenAI’s Response
OpenAI has acknowledged the incident but says it relates to an older version of ChatGPT. The company has since rolled out a new model with internet search capabilities, which it says improves accuracy.
However, the original conversation remains in OpenAI’s system, and critics say more needs to be done to prevent such reputational damage in the future.
What Does This Mean For Your Business?
This story raises a pressing question: how much trust can individuals and businesses place in today’s most powerful tech platforms? It shows how AI can cause very real harm through inaccuracies that feel all too plausible.
For individuals like Arve Hjalmar Holmen, the (alleged) damage caused by false AI-generated content is deeply personal, but the implications extend far beyond a single case. As generative AI becomes increasingly embedded in everything from customer service to search engines, the risks around misinformation, defamation, and lack of accountability are growing more serious. For regulators and privacy advocates, this case could become a key reference point in the broader push to bring AI development in line with data protection laws, especially in Europe, where the GDPR offers some of the strongest safeguards.