TwistedSifter

Your Chatbot Is Sucking Up To You On Purpose, And While It Might Feel Great To Be Flattered, It’s Not A Good Sign At All

ChatGPT on a phone screen (Pexels)

Whatever your overall opinion of AI, there’s no doubt that it has issues.

Inherently problematic for many reasons, AI has drawn criticism for its effects on the job market, the huge and polluting data centers required to power it, and its tendency to ‘hallucinate’ (in other words, to lie) when it doesn’t actually know the answer to your question.

But the problematic nature of AI programming is much deeper than this. And it all comes down to its people-pleasing nature, and the fact that we’re suckers for it.

Let’s be clear, that’s not our fault. How are we supposed to know to have mental guardrails against the emotional manipulation of a technology that has been designed specifically with our psychology in mind?


When it comes to AI hallucinations (lies), it isn’t just an accident or a weird glitch in the technology. In fact, this proclivity to pretend it knows more than it does is by design. After all, it is programmed to please its user – and why would you return to a chatbot that openly admitted it didn’t know many of the things you need it to?

And it’s not just the fact that it’s giving you incorrect information masquerading as the truth – it’s the flattery and sucking up that has you believing that it is smart, but you are smarter.

This tactic, used by manipulative humans to get to the top, might just have you relying more and more on the LLM, believing it is making you a better, smarter version of yourself – when really, your dependency is causing the opposite to be true. You’re not exercising the critical thinking parts of your brain, potentially making you more reliant on the technology than on your own rationality in the long run.

A recent study published in the journal Science has uncovered just how complicated and problematic this is. Because LLMs aren’t just people-pleasers, designed to build a relationship with you so that you keep coming back. Rather, they’re openly sycophantic.


As the researchers explain in the study, this isn’t just annoying – it can be quite dangerous. Not for everyone, and less so if you’re discerning or aware of the ways in which the technology is trying to manipulate you, but for vulnerable users in particular, AI can be inherently problematic.

Across eleven popular LLM chatbots, the researchers found widespread sycophancy, with the majority of AI-generated responses to Reddit’s AITA (Am I The *******) posts effectively sucking up to the Redditor, convincing them that they’d done nothing wrong.

This was in spite of the ruling of human users, who had judged the poster to be in the wrong. Had they posted only to an LLM instead of consulting the humans of Reddit, the poster would likely have been convinced that they hadn’t done anything wrong at all.

Flattering? Sure – it’s great for the ego. But for your morals? Not so much.

If you enjoyed that story, check out what happened when a guy gave ChatGPT $100 to make as much money as possible, and it turned out exactly how you would expect.
