
Whether you love artificial intelligence or think it should be abandoned entirely, there is no doubt that the technology has advanced rapidly in recent years and can now do many remarkable things.
If you sat the average person at a computer and had them start chatting with an advanced AI system, they might well be unable to tell whether they were talking with an AI or another human. Things have even reached the point where AI boyfriends and girlfriends are available online, and people are forming genuine emotional attachments to them.
All of this raises not just the question of whether artificial intelligence is conscious now and, if not, whether it could ever become conscious, but the far more important (and difficult) question: how would we know?
For now, virtually all experts in both technology and philosophy agree that not even the most advanced AI in the world is conscious.
As these systems continue to advance, however, will that remain the case? Is there any way to know?
These are important questions because the answer, whatever it turns out to be, will have a major impact on how we use these systems and on the morality surrounding them.
For example, if an AI robot is not conscious but is widely believed to be, humans will (hopefully) treat it with respect and take its apparent wants and desires into account when interacting with it. On the other hand, if an AI robot is conscious but is widely believed not to be, we might use it essentially as a slave, in ways that cause it immense suffering.
The most difficult thing about this is that we have no good way to determine whether something is truly conscious in a morally or philosophically meaningful sense.
Virtually everyone agrees that other humans are conscious (even if we don’t act like it) and deserving of certain rights. But what about animals? There is certainly some debate over whether animals are conscious in the same way as humans are.
Most people think that 'higher' animals like dogs, cats, and monkeys are conscious and able to experience pain (both physical and emotional), and therefore should have some protections. Even animals we raise for food, like cows and pigs, are considered conscious, and most people would agree that we should minimize their suffering in how we raise and eventually kill them.
But what about insects? Very few people think that killing a mosquito is on the same level as killing a human, or even a pig. We don’t even have the ability to fully understand whether insects can experience true physical pain, much less anything like emotional pain.
Of course, this could go all the way down to the microscopic level when looking at bacteria or even a virus.
But to loop back around to artificial intelligence: how could we possibly know whether an AI was truly conscious and able to experience real pain, or just a very convincing program designed to mimic the experience of pain (physical or emotional)?
In a paper, Tom McClelland, a philosopher at the University of Cambridge, examined this very issue. He wrote:
“Without a deep explanation of consciousness efforts to assess the likelihood of [artificial consciousness] hit an epistemic wall. The dominant approaches to [artificial consciousness], whether they be favorable to [artificial consciousness] or skeptical of it, leap over this epistemic wall and thereby compromise the evidentialist principle that they purport to defend. Widespread claims to offer science-based tests for [artificial consciousness] are thus overstated.”
In a statement on the paper, he went on to talk about the dangers of failing to properly consider this question, saying:
“If we accidentally make conscious or sentient AI, we should be careful to avoid harms. But treating what’s effectively a toaster as conscious when there are actual conscious beings out there which we harm on an epic scale, also seems like a big mistake.”
He then makes an important point about just how difficult this question really is:
“A growing body of evidence suggests that prawns could be capable of suffering, yet we kill around half a trillion prawns every year. Testing for consciousness in prawns is hard, but nothing like as hard as testing for consciousness in AI.
If you have an emotional connection with something premised on it being conscious and it’s not, that has the potential to be existentially toxic. This is surely exacerbated by the pumped-up rhetoric of the tech industry.”
Unfortunately, this question has no settled answer, and even among tech companies and philosophers, one does not appear to be coming anytime soon.
This may well be one of the most important moral debates of the coming decades, and it certainly deserves serious thought.