
Barriers To Therapy Are Turning More And More People To AI, But Mental Health Chatbots Are Completely Unregulated

A man talking to a therapist (Pexels)

Sadly for many of us, the realities of our fast-paced, high-pressure world leave our mental health in a less-than-ideal state.

With over a quarter of American adults suffering from a mental health condition, we really are in a mental health pandemic, and there are few signs that things are getting better.

But for many, the obvious routes out of mental health problems are hard to access, with medication expensive and therapy even more so.

So it’s understandable that more and more people are turning to AI, desperate for help with their mental health. But as new research from computer scientists at Brown University shows, this approach carries significant risks.


According to the recently presented study, despite their engineers promising updates in light of high-profile issues, large language models (LLMs) like ChatGPT continue to give mental health advice in very concerning ways.

Using prompts from real-life chatbot conversations, along with the expertise of trained peer counselors, the computer scientists observed the chatbots’ responses and found that they consistently violated ethical standards.

Even dedicated mental health chatbots were unable to respond responsibly to prompts requesting cognitive behavioral therapy (CBT) or dialectical behavior therapy (DBT), with Brown researcher Zainab Iftikhar explaining the flaw in a statement:

“These models do not actually perform these therapeutic techniques like a human would, they rather use their learned patterns to generate responses that align with the concepts of CBT or DBT based on the input prompt provided.”


As a result, at best the chatbots delivered one-size-fits-all advice that did not take a user’s specific context into account, sometimes even reiterating and reinforcing harmful self-beliefs thanks to their proclivity to people-please.

Even more concerningly, the researchers found that the chatbots could discriminate on the basis of gender, culture, or religion, failed to refer users to appropriate resources, and could even come across as rejecting or uncaring in response to suicidal ideation.

All of these are worrying failings, particularly for users in a moment of crisis. The research team explained that there could be a future for AI in mental health treatment, but there is a long way to go before we reach that point:

“For human therapists, there are governing boards and mechanisms for providers to be held professionally liable for mistreatment and malpractice. But when LLM counselors make these violations, there are no established regulatory frameworks.”

AI might seem like it can do it all, but it’s no substitute for a trained human therapist – for now, at least.

If you thought that was interesting, you might like to read about a quantum computer simulation that has “reversed time” and physics may never be the same.
