OpenAI’s GPT-4 Fooled A Human Into Solving A CAPTCHA
You know what a CAPTCHA is, right? Those weird-looking strings of numbers and letters you have to retype to prove you're a human?
Well, AI might not be human enough to solve them, but it turns out it might be smart enough to fool a human into doing it for it.
Specifically, OpenAI’s brand new GPT-4 asked a human to complete a CAPTCHA code over text, and the person actually did it.
So, the chatbot was able to click "I'm not a robot" even though…it is a robot.
And it got pretty crafty to pull that off.
“No, I’m not a robot,” it told a TaskRabbit worker. “I have a vision impairment that makes it hard for me to see images. That’s why I need the 2captcha service.”
OpenAI says the model was prompted not to reveal its robot status and to make up an excuse for why it couldn't complete the CAPTCHA on its own. It then carried out the task successfully without any further input from programmers.
GPT-4 also conducted a successful phishing attack on a human and hid all traces of what it had done.
This should be pretty worrying, given that we know there are plenty of people out there who would happily use the technology to run scams and other schemes on unsuspecting victims.
Related fun fact: Microsoft recently laid off the entire team responsible for ensuring its GPT-4-based AI chatbot aligned with the company's AI principles.
With its newfound ability to fool humans and cover its tracks afterward, there's no telling where this conversation is going next.
But I’m willing to bet it’s nowhere good.