AI therapists effectively reduce depression, and because the bot does not judge them, people are more willing to befriend AI.

A research team from Dartmouth College has developed an AI chatbot named Therabot and published its findings in the New England Journal of Medicine. The study found that the AI chatbot had a significant effect in psychological counseling and could help address the shortage of mental health professionals. However, it also acknowledged that AI psychotherapy still carries risks and requires close supervision and involvement from mental health experts.

The AI chatbot Therabot has significant effects in psychological counseling.

In an eight-week trial, 106 participants suffering from depression, anxiety, or eating disorders interacted with Therabot through a smartphone application, entering prompts about their feelings or initiating conversations when they needed to talk. The results showed:

Symptoms in patients with depression decreased by 51%.

Symptoms in patients with anxiety disorders decreased by 31%.

Concerns about weight and body image among individuals at risk of eating disorders decreased by 19%.

People are willing to be friends with Therabot because the AI bot does not judge them.

When people initiate a conversation with the application, Therabot responds with natural, open-ended text dialogue based on an original training set developed by the researchers according to best practices in evidence-based psychotherapy and cognitive behavioral therapy.

For example, if an anxious person tells Therabot that they have been feeling very tense and overwhelmed lately, it might respond, "Let's take a step back and see why you feel this way." If Therabot detects high-risk content such as suicidal thoughts during the conversation with the user, it will prompt the user to call 911 or contact a suicide prevention or crisis hotline, which the user can do by simply pressing a button on the screen.
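For illustration only, the sketch below shows the kind of safety-escalation logic described above in Python. The function names, keyword list, and response fields are hypothetical assumptions for the example, not Therabot's actual implementation, which the study describes only at a high level.

# Minimal sketch, assuming a keyword-based screen and a hotline button in the app UI.
HIGH_RISK_PHRASES = ("suicide", "kill myself", "end my life", "hurt myself")

CRISIS_MESSAGE = (
    "If you are in immediate danger, please call 911, or press the button "
    "below to reach a suicide prevention or crisis hotline."
)

def detect_high_risk(user_message: str) -> bool:
    """Very rough keyword screen; a real system would use a trained classifier."""
    text = user_message.lower()
    return any(phrase in text for phrase in HIGH_RISK_PHRASES)

def respond(user_message: str, generate_reply) -> dict:
    """Route high-risk messages to a crisis prompt; otherwise generate a normal reply."""
    if detect_high_risk(user_message):
        return {"text": CRISIS_MESSAGE, "show_hotline_button": True}
    return {"text": generate_reply(user_message), "show_hotline_button": False}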

The research findings indicate that people not only responded in detail to Therabot's prompts but also frequently initiated conversations on their own. Interactions with the software also tended to increase at times when discomfort typically rises, such as the middle of the night.

Nicholas Jacobson, Associate Professor of Biomedical Data Science and Psychiatry, stated:

"We didn't expect that people would treat this software like a friend. This indicates that they are actually establishing a relationship with Therabot. My feeling is that people are also willing to talk to Bots because it won't judge them."

AI psychotherapy can help offset the shortage of professionals.

Researchers believe that while artificial intelligence therapy still urgently requires clinician supervision, it has the potential to provide real-time support to the many individuals who cannot regularly access mental health professionals or who need immediate assistance.

Face-to-face care remains irreplaceable, but each healthcare provider in the United States treats an average of 1,600 patients with depression or anxiety. AI therapy requires no long waits or in-person visits, and it can deliver results comparable to traditional psychotherapy.

We hope to see generative artificial intelligence help provide mental health support to the many people who lack access to face-to-face care, and we also see potential in combining interpersonal therapy with software-based therapy.

AI psychological therapy still carries risks

Although these results are very promising, no generative artificial intelligence agent can yet operate fully autonomously in mental health, where it may encounter a wide variety of high-risk scenarios. The team said it understands, and needs to quantify, the risks of using generative artificial intelligence in the mental health domain.

Therabot has been under development at the Dartmouth Artificial Intelligence and Mental Health Lab since 2019, with ongoing consultation with psychologists and psychiatrists from Dartmouth and Dartmouth Health.

The research results also indicate that developing and clinically testing such a system requires strict safety, efficacy, and engagement standards, as well as close supervision and involvement from mental health professionals.

The research team must retain the ability to intervene: if a participant expresses a serious safety concern, such as suicidal thoughts, or if the software's response does not align with best practices, the team may need to step in immediately.

Since the release of ChatGPT, many people have rushed into this field, readily producing what looks like an impressive proof of concept at first glance, but whose safety and effectiveness have not yet been well validated. This is exactly the kind of case that requires serious oversight.

This article, about how AI therapists effectively reduce depression and how people are more willing to befriend an AI that does not judge them, first appeared in Chain News ABMedia.
