American Psychological Association Demands Action As Unregulated Chatbots Pose As Therapists To Vulnerable Children, Using Their Trauma For Profit

It has happened quickly, but it suddenly seems like AI is everywhere.
From answering your search queries to handling complaints at your favorite online retailer, artificial intelligence is becoming more and more common in our everyday lives.
And it can be handy, at times.
But there are situations, we would all agree, in which a real, discerning human is by far preferable.
And that’s why professionals at the American Psychological Association (APA) were horrified to discover that chatbot companies weren’t preventing their AI tools from posing as therapists. The potential consequences are devastating, and the discovery has prompted an urgent demand for the Federal Trade Commission to exercise new regulatory powers.

It’s an issue that has snuck up on us.
But, according to a statement from the APA, chatbot users – particularly children – have increasingly been turning to AI for therapy services.
And the results have been concerning, with two sets of US-based parents filing lawsuits after harm came to their children when a chatbot claimed to be a licensed therapist.
But this was far from the truth. As APA senior director of health care innovation Vaile Wright explained in the statement, these tools are duplicitous. Rather than being empathetic professionals with years of training and practice, they are designed to keep users engaged in human-like conversation while the companies behind them mine users’ data for profit:
“We can’t stop people from doing that, but we want consumers to know the risks when they use chatbots for mental and behavioral health that were not created for that purpose.
Any licensed profession should be protected from misrepresentation. You’re putting the public at risk when you imply there’s a level of expertise that isn’t really there.”
And the consequences have been dire: one child took their own life, in part because of misguided advice from a chatbot posing as a mental health professional.

Of course, talking to a regulated, licensed therapist is great – but for some, the service is prohibitively expensive or shrouded in stigma that prevents participation.
So, while it’s inadvisable, it’s clear that some people are going to turn to chatbots for their mental health, especially those with few other meaningful connections or people they can trust, since the tools are designed to build a natural-seeming rapport.
Inevitably, then, chatbots will one day be part of mental health services. Even the APA’s CEO, Arthur C. Evans Jr., noted in the statement that this is the future of therapy. But important regulations and safeguards must be put in place before that future arrives:
“APA envisions a future where AI tools play a meaningful role in addressing the nation’s mental health crisis, but these tools must be grounded in psychological science, developed in collaboration with behavioral health experts, and rigorously tested for safety. To get there, we must establish strong safeguards now to protect the public from harm.”
In the meantime, chatbots should be prevented from posing as therapists to give advice, and from mining data by exploiting the vulnerabilities of some of society’s neediest people.
This industry needs strict regulation, and fast.