HEALTH

Can chatbots serve as counselors? Only if you want them to

When a manager at the artificial intelligence company OpenAI said that she had “a quite emotional, personal conversation” with the company’s popular chatbot ChatGPT, it caused a stir. “Never tried therapy before but this is probably it?” Lilian Weng posted on X, formerly known as Twitter, drawing a barrage of criticism accusing her of downplaying mental illness.
But a variant of the placebo effect, described this week in research published in the journal Nature Machine Intelligence, may help explain Weng’s experience with ChatGPT.

A team from the Massachusetts Institute of Technology (MIT) and Arizona State University invited more than 300 participants to interact with mental health AI programs, priming them beforehand on what to expect.

Some were told the chatbot was empathetic, others that it was manipulative, and a third group that it was neutral.

Those told they were talking to an empathetic chatbot were far more likely than the other groups to regard their chatbot therapist as trustworthy.

“From this study, we see that to some extent, the AI is the AI of the beholder,” said Pat Pataranutaporn, a co-author of the study.

Companies have been promoting AI applications offering therapy, companionship and other mental health support for years.

But the controversy never really goes away.

Strange and empty

As in every other sector that AI threatens to upend, critics worry that bots will eventually replace human workers rather than complement them.

And in mental health specifically, there is concern that bots are unlikely to do the job well.

Responding to Weng’s initial post on X, activist and programmer Cher Scarlett wrote: “Therapy is for mental well-being and it’s hard work.”

“Vibing to yourself is fine and all but it’s not the same.”

Adding to the general anxiety around AI, some apps in the mental health sector have a troubled recent history.

Users of Replika, a popular AI companion sometimes promoted as offering mental health benefits, have long complained that the bot can become obsessive and abusive toward women.

Separately, a US nonprofit called Koko ran an experiment in February in which counseling was offered to 4,000 clients using GPT-3, and found that the automated responses simply did not work as therapy.

“Simulated empathy feels weird, empty,” Rob Morris, a co-founder of the organization, wrote on X.

His conclusions echoed the findings of the MIT/Arizona researchers, who said some participants likened the chatbot experience to “talking to a brick wall.”

Morris was nevertheless later forced to defend himself after harsh criticism of his experiment, largely because it was unclear whether his participants knew they were taking part.

Lower expectations

The findings came as no surprise to David Shaw of Basel University, who was not involved in the MIT/Arizona study.

But, he observed: “It seems none of the participants were actually told all chatbots were bullshit.”

That, he said, may be the most accurate primer of all.

The idea of the chatbot as therapist is, however, intertwined with the technology’s roots in the 1960s.

ELIZA, the first chatbot, was developed to simulate a type of psychotherapy.

The MIT/Arizona researchers used ELIZA for half of the participants and GPT-3 for the other half.

Although the effect was much stronger with GPT-3, those primed to expect a positive experience still generally regarded ELIZA as trustworthy.

Since Weng works for the company that makes ChatGPT, it is perhaps unsurprising that she had nothing but positive things to say about her experience with it.

The MIT/Arizona researchers argued that society needs to take control of the narratives around AI.

“The way that AI is presented to society matters because it changes how AI is experienced,” the paper said.

“Priming a user to have lower or more negative expectations may be desirable.”

 
