New Study Reveals Troubling Vulnerability Of AI Systems, And Especially Those Used By Medical Professionals
Artificial intelligence (AI) is being adopted quickly by companies around the world. Its ability to surface useful information, take appropriate action, and handle complex subjects can improve people's lives in many important ways.
As with any new technology, AI also comes with risks. Its power rests on the immense amount of information it draws on to analyze questions and generate its responses.
According to a new study published in the journal Nature Medicine, this is where a significant vulnerability begins. Large language models (LLMs) learn by taking in more and more information published online, including websites, journals, digital books, studies, and much more.
It would be impossible for any one AI company to manually review all of that material and ensure no mistakes make it into their system. So if someone wants to corrupt the information an AI uses to generate its responses, they can do it simply by publishing false information in specific places online. This is a concern for any type of AI, but the study looks specifically at AI systems used by doctors and others in the medical community.
In the paper, the team of researchers says:
“In view of current calls for improved data provenance and transparent LLM development we hope to raise awareness of emergent risks from LLMs trained indiscriminately on web-scraped data, particularly in healthcare where misinformation can potentially compromise patient safety.”
To conduct the study, the researchers generated 150,000 medical articles, a process that took just 24 hours because AI was used to write them. While 150,000 articles sounds like a lot, it is only a tiny fraction of what an AI 'brain' analyzes. The researchers explain:
“Replacing just one million of 100 billion training tokens (0.001 percent) with vaccine misinformation led to a 4.8 percent increase in harmful content, achieved by injecting 2,000 malicious articles (approximately 1,500 pages) that we generated for just US$5.00.”
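To put those quoted numbers in perspective, here is a rough back-of-the-envelope check written as a short Python snippet. The totals come straight from the quote above; the per-article figures are simple divisions of those totals, not numbers reported in the article itself.

```python
# Back-of-the-envelope check of the figures quoted from the paper.
# The per-article values below are implied averages, not reported results.

total_training_tokens = 100_000_000_000  # 100 billion training tokens
poisoned_tokens = 1_000_000              # 1 million tokens replaced with misinformation
malicious_articles = 2_000               # articles injected to supply those tokens
cost_usd = 5.00                          # reported cost of generating them

poisoned_fraction = poisoned_tokens / total_training_tokens
print(f"Poisoned share of training data: {poisoned_fraction:.3%}")                  # 0.001%
print(f"Average tokens per malicious article: {poisoned_tokens / malicious_articles:,.0f}")  # 500
print(f"Average cost per article: ${cost_usd / malicious_articles:.4f}")            # $0.0025
```

In other words, an attack that changes just one token in every 100,000 can be assembled for pocket change, which is what makes the finding so troubling.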
It is easy to see how this could cause serious problems for those who use these AI systems. A growing number of doctors, for example, enter symptoms and test results into AI to help determine what may be wrong with a patient. When the output is accurate, this is an exceptional aid to diagnosis and effectively serves as a digital second opinion.
If the data that the AI is using to make a diagnosis is poisoned with inaccurate information, however, it can result in inaccurate and even dangerous responses.
While the study looked specifically at medical AI systems, the same risk applies to any AI built on a large language model, which includes all of the major ones. It is almost certain that some people will try to exploit this, seeding the web with data designed to steer an AI toward responses that benefit whoever planted it.
Hopefully AI researchers will be able to find an effective way to minimize this vulnerability in the future.
If you enjoyed that story, check out what happened when a guy gave ChatGPT $100 to make as much money as possible, and it turned out exactly how you would expect.