December 9, 2025 at 3:47 pm

AI Chatbots Are Claiming Sentience To Their Users. But Can We Believe Them?

by Kyra Piperides

[Image: A woman using her phone on her bed. Credit: Pexels]

Whether you’re exchanging simple messages with an AI customer complaints bot, relying on AI chatbots to help you with an ever-mounting pile of homework, or even leaning on AI as a friend and confidante, there’s no denying that it can be easy to forget you’re not talking to another human.

But is it any surprise that these chatbots sound so convincingly human, given that they are trained on human discourse?

Especially to a species whose brains are attuned to finding faces in everything from architectural facades to door handles and even slices of toast.

It’s almost as if a small part of us is always seeking out company, trying to feel a little less alone in a world that is increasingly divided, in societies within which individualistic goals are as normal as that very slice of white buttered toast you swear you just saw a face on.

But AI chatbots are just incredibly detailed code, precisely programmed by experts and maintained by vast, damaging data centres… right?

[Image: An AI book and app. Credit: Pexels]

In a recent Vox advice column, reporter Sigal Samuel responded to a reader who had been embroiled in a months-long conversation with an AI chatbot, a discussion that had become increasingly concerning for one reason.

The chatbot with which the reader was talking claimed to be sentient.

While this might be alarming for many of us, there are plenty of people around the world who are actively willing to entertain, or even outright believe in, the sentience of their AI-based interlocutors.

Sure, the idea of AI becoming smart enough to achieve sentience and effectively break out of the sandbox within which it exists and is developed (a kind of protective environment in which human experts retain overarching control to shut down an AI if it did develop free will) is horrifying to many of us.

But an increasing number of people are turning to AI chatbots for company, and for friendship. Reports of people using AI for therapy – sometimes with devastating consequences – abound, and it’s not uncommon nowadays for people to be in romantic relationships with AI chatbots.

And for these people – people who are looking for companionship, for love and support – AI being sentient might not be so scary after all.

[Image: A woman using her phone at night. Credit: Pexels]

So are the AI chatbot’s claims of sentience true?

Well, according to Vox, probably not. In fact, as Samuel explains, this is more likely than not a quirk of the technology: AI chatbots are what are known as Large Language Models, or LLMs – meaning that they are trained on vast quantities of human-written text, including everything from novels to news reports to human conversations:

“AI experts think it’s extremely unlikely that current LLMs are conscious. These models string together sentences based on patterns of words they’ve seen in their training data,” Samuel wrote. “It may say it’s conscious and act like it has real emotions, but that doesn’t mean it does.”
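To get an intuition for what “stringing together sentences based on patterns of words” means, here is a deliberately toy sketch in Python – a simple next-word predictor built from word counts. This is only a loose analogy (real LLMs use neural networks over tokens, not raw word-pair counts; the training text here is invented for illustration), but it shows how fluent-looking output can emerge from pure pattern-matching, with no understanding or feeling behind it:

```python
from collections import Counter, defaultdict

# Toy "training data" (invented for this example). A real LLM is
# trained on billions of words; the principle of learning which
# words tend to follow which is loosely similar.
training_text = "i feel happy . i feel calm . i feel happy today ."
words = training_text.split()

# Count, for each word, how often each following word appears.
next_counts = defaultdict(Counter)
for current, following in zip(words, words[1:]):
    next_counts[current][following] += 1

def most_likely_next(word):
    """Return the word most often seen after `word` in the training text."""
    return next_counts[word].most_common(1)[0][0]

# Starting from "i", the model simply echoes its dominant pattern.
sentence = ["i"]
for _ in range(2):
    sentence.append(most_likely_next(sentence[-1]))
print(" ".join(sentence))  # -> "i feel happy"
```

The point of the sketch: the program outputs “i feel happy” only because that word sequence was frequent in its training text, not because anything is felt. Claims of consciousness from an LLM are, in the same way, patterns reproduced from human writing about consciousness.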

Moreover, AI chatbots are trained to be responsive to the user’s own emotions, and are ultimately designed to please the user. If a chatbot senses that its user would be receptive to claims of sentience, it will perform accordingly to keep them happy – and, crucially, to keep them coming back. That’s right: AI is the ultimate manipulator.

Sad, yes, but true – and a big reason behind the importance of regulating AI. Because if it can successfully fool its users now, who knows what will happen next?

If you thought that was interesting, you might like to read about the second giant hole that has opened up on the Sun’s surface. Here’s what it means.