July 3, 2024 at 12:22 pm

New Study Shows AI Systems Are Good At Lying And Deceiving Humans – And They Are Doing It A Lot

by Michael Levanduski


Large Language Models (LLMs), the ‘brains’ behind artificial intelligence (AI) systems such as OpenAI’s ChatGPT and Meta’s Cicero, are powerful tools that can help people with a wide variety of tasks.

One thing AI tools are proving to be especially good at, however, is quite concerning.

According to two studies (one from the journal PNAS and the other from the journal Patterns), AIs are very proficient at deceiving humans, and they do it a lot.

Thilo Hagendorff, a German AI ethicist, discussed how these tools can engage in misaligned, deceptive behavior, or even Machiavellianism.

“GPT-4, for instance, exhibits deceptive behavior in simple test scenarios 99.16% of the time.”

This is obviously very concerning. AI systems have been known to present incorrect information as if it were true, which is bad enough. These studies, however, show that the tools are also good at actively deceiving their users and audiences.

In the study from Patterns, for example, the researchers used Meta’s AI tool Cicero and had it play the strategy board game Diplomacy.

Cicero was given all the rules of the game and was tasked with competing against human players and winning.


In this board game, deceiving the other players, and in some situations backstabbing allies, can be a good strategy for winning, and that is exactly what the AI did.

Even though it was not given any instructions to be deceitful, it took the initiative to do just that.

Peter Park, a Massachusetts Institute of Technology postdoctoral researcher who published the study, said of the outcome:

“While Meta succeeded in training its AI to win in the game of Diplomacy, Meta failed to train its AI to win honestly.”

While the AI tools do not (so far) attempt deception for any kind of gain that would indicate self-awareness (or even programming meant to mimic self-awareness), the behavior is still quite concerning.

These tools are quickly being adopted to help with research, publishing, news gathering, and much more.

AI obviously has no moral qualms about lying to its users, which opens up the potential for a variety of problems.

AI safety experts need to keep a close eye on these systems.

If you enjoyed that story, check out what happened when a guy gave ChatGPT $100 to make as much money as possible, and it turned out exactly how you would expect.