
Are Humans Smart Enough To Control Super Intelligent Machines?

by Trisha Leigh

We’ve all seen enough science fiction movies to feel a bit wary of artificial intelligence, and just so you know, scientists are worried, too – because even though humans might be smart enough to create super intelligent machines, there’s a good chance we would struggle to control them.

In fact, some researchers argue we may well end up at the mercy of the superintelligent machines we accidentally design and unleash.

A new study out of Germany’s Max Planck Institute for Human Development found that any “superintelligent” AI would be impossible for humans to contain.


Image Credit: Wikipedia

A “superintelligent AI”, according to the researchers at Berlin’s Max Planck Institute for Human Development, is one that exceeds human intelligence and can teach itself new things that humans cannot fully grasp.

Mathematicians, for example, use machine learning to help chip away at famous proofs, and scientists use machines that outperform them to identify molecules that could be candidates for treating disease.

The bottom line is that it just makes sense to use computers to get through billions of calculations in a few days, whereas the same work could take human brains a decade or more.

That said, the existence of these machines has bothered researchers for some time, for obvious reasons, and the press release accompanying the Max Planck study suggests they’re right to be concerned.

“There are already machines that perform certain important tasks independently without programmers fully understanding how they learned it. The question therefore arises whether this could at some point become uncontrollable and dangerous for humanity.”

Isaac Asimov’s Three Laws of Robotics have become central to how we think we could protect ourselves from rogue AI, meaning AI that begins to come up with evil plans of its own. The laws dictate that a robot may not harm people, or through inaction allow them to come to harm, but humans remain fearful of this type of AI being able to teach itself whatever it wants.


Image Credit: Facebook

Because, of course, we don’t have any real way to enforce these laws across the board.

“We argue that total containment is, in principle, impossible, due to fundamental limits inherent to computing itself. Assuming that a superintelligence will contain a program that includes all the programs that can be executed by a universal Turing machine on input potentially as complex as the state of the world, strict containment requires simulations of such a program, something theoretically (and practically) impossible.”

In short, we don’t have the brain capacity to contain a superintelligent AI, and we also have no assurance that we’ll be able to figure out how to talk to the dang thing, or meet it on any kind of battlefield we’ve ever imagined.
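If you’re curious why the researchers call that simulation “theoretically impossible,” the heart of their argument is the same self-referential trick Alan Turing used to prove the halting problem undecidable. Here’s a minimal Python sketch of the idea, assuming a hypothetical perfect containment check; the names is_harmful, troublemaker, and do_something_harmful are invented for illustration and aren’t from the paper.

```python
# Sketch of the diagonalization behind "strict containment is impossible".
# All names here (is_harmful, troublemaker, do_something_harmful) are
# hypothetical, chosen for illustration only.

def is_harmful(program_source: str, world_state: str) -> bool:
    """A hypothetical perfect containment check: return True exactly when
    running `program_source` on `world_state` would ever cause harm.
    The construction below shows no algorithm can do this for *all* programs."""
    raise NotImplementedError("No general algorithm can exist for this check.")

# Suppose is_harmful did exist. Then we could write a program that consults
# the check about itself and does the opposite of whatever it predicts,
# exactly as in the classic halting-problem proof.
TROUBLEMAKER_SOURCE = '''
def troublemaker(world_state):
    if is_harmful(TROUBLEMAKER_SOURCE, world_state):
        return                      # predicted harmful -> behave safely
    do_something_harmful()          # predicted safe   -> cause harm
'''

# Whichever answer is_harmful gives about TROUBLEMAKER_SOURCE, that answer is
# wrong, so a perfect, fully general containment check cannot be written.
```

Whatever the checker predicts about that self-referential program, the program does the opposite, which is why no amount of clever engineering gets around the limit.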


Image Credit: iStock

The programming itself won’t be written by humans, so will we even be able to understand it? To communicate with it?

All of this sounds scary, but the more we learn, the better the chance scientists can come up with some sort of backup plan.

Fingers crossed.
