December 11, 2025 at 9:49 am

OpenAI Admits That AI Has A Tendency To Hallucinate, And It Seems To Be Getting Worse

by Kyra Piperides

[Image: ChatGPT on a phone (Pexels)]

You know that one friend, the one you can send photos of your outfit to, or consult about a confusing piece of work drama to find out whether you’re in the wrong, because you know they won’t lie?

For a long time, we’ve assumed that technology is like that one friend. We’ve trusted it with our most inane and embarrassing questions under the guise of anonymity, safe in the knowledge that it won’t judge us.

And most importantly, it will tell us the truth, without ambiguity or prejudice.

Right?

[Image: Code on a laptop with a notebook and pen beside it (Pexels)]

Unfortunately for the reputation of AI (and technological development in general), the technology has recently been embroiled in notorious cases in which it has been caught red-handed, right in the middle of a bare-faced lie.

In fact, in the UK, Sky News Deputy Political Editor Sam Coates actually caught the AI he was using in a lie, and unfortunately for the chatbot, Coates used his expertise and journalistic determination to (eventually) coax out an admission, all of which was broadcast in a special news segment.

Of course, OpenAI have been compelled to respond, with a new research paper and accompanying blog post detailing exactly why their technology has been found to have a propensity to lie. Or, as OpenAI prefer to call it, ‘hallucinate’:

“Even as language models become more capable, one challenge remains stubbornly hard to fully solve: hallucinations. By this we mean instances where a model confidently generates an answer that isn’t true.”

[Image: Blue lights and computer code (Pexels)]

So what exactly is the cause of these hallucinatory untruths that the chatbots prefer to tell their users, instead of simply telling them the truth?

Well, OpenAI are keen to remind us that the chatbot isn’t lying. It simply isn’t telling the truth. There are many reasons for this, and many of them point toward the people-pleasing traits that AI has been designed to employ. It wants to please you.

So instead of admitting it doesn’t know something, it comes up with an answer that could be true. But, importantly, it isn’t. It’s simply hazarding a guess, based on whatever information it could conjure up, as OpenAI go on to explain:

“Hallucinations are plausible but false statements generated by language models. They can show up in surprising ways, even for seemingly straightforward questions. For example, when we asked a widely used chatbot for the title of the PhD dissertation by Adam Tauman Kalai (an author of this paper), it confidently produced three different answers—none of them correct. When we asked for his birthday, it gave three different dates, likewise all wrong.”

[Image: An AI book and app (Pexels)]

Sure, a guess stands more chance of being right than simply admitting that it doesn’t know, as OpenAI take pains to explain to their users: name a specific date for someone’s birthday and you have roughly a one-in-365 chance of being correct, whereas answering ‘I don’t know’ scores nothing at all. But that’s not what’s important.

The important fact is that the chatbot is not explaining that it is guessing – rather, it is presenting its guess with absolute confidence, leading the uncertain user to simply believe that these outright lies are the truth.

Sorry, ‘hallucinations’.

And in a world of increasing levels of deepfakes, fake news, and outright gibberish and lies from the people we should be able to trust, these ‘hallucinations’ could be among the most damaging mistruths of all.
