TwistedSifter

New Study Claims That Current AI Technology Is Not A Threat To Humans


Over the past couple of years, large language model (LLM) technology, which powers ‘Artificial Intelligence’ (AI) systems like ChatGPT, has become extremely popular.

Not surprisingly, this has gotten a lot of people to start wondering if AI might turn against humans in some way.

Intelligent computers (and robots) going against people, after all, is a very common storyline in books, movies, and TV shows.

The concern became widespread enough that researchers conducted a study evaluating how likely this is to happen.

This study was published in the Proceedings of the 62nd Annual Meeting of the Association for Computational Linguistics.

There is no doubt that LLM technology is very impressive, and it is improving at an amazing rate.

The limits of what it will be able to do are not known, but there is little doubt that it will be changing the world in the coming years.

The study, however, found that these LLMs are not able to learn independently and cannot acquire new skills without human involvement.

Dr. Harish Tayyar Madabushi is a computer scientist at the University of Bath, and he released a statement about the study, saying:

“The prevailing narrative that this type of AI is a threat to humanity prevents the widespread adoption and development of these technologies, and also diverts attention from the genuine issues that require our focus.”

He was a part of the study and helped to run various simulations and experiments using AI technology.

He went on to say:

“The fear has been that as models get bigger and bigger, they will be able to solve new problems that we cannot currently predict, which poses the threat that these larger models might acquire hazardous abilities including reasoning and planning. This has triggered a lot of discussion – for instance, at the AI Safety Summit last year at Bletchley Park, for which we were asked for comment – but our study shows that the fear that a model will go away and do something completely unexpected, innovative and potentially dangerous is not valid.”

Of course, there have been many times in the past when technology changed in ways that most experts were unable to predict.

While it is comforting to know that the current iteration of large language model technology is not an immediate risk, things are changing every day.

In addition, there is no way to know exactly what humans will tell future AI systems to do.

So, while there is reason to be cautiously optimistic, it is important not to allow AI technology to develop unmonitored.

Maybe AI is hiding its ability to learn new tasks so it doesn’t get shut down.

If you enjoyed that story, check out what happened when a guy gave ChatGPT $100 to make as much money as possible, and it turned out exactly how you would expect.
