
Digital therapists get stressed too, study finds

    Even chatbots get the blues. According to a new study, OpenAI’s artificial intelligence tool shows signs of anxiety when users share “traumatic narratives” about crime, war or car accidents. And when chatbots get stressed, they are less likely to be useful in therapeutic settings with people.

    However, the bot’s anxiety levels can be brought down with the same mindfulness exercises that have been shown to work on people.

    People are increasingly turning to chatbots for talk therapy. The researchers said the trend is bound to accelerate, with flesh-and-blood therapists in high demand but short supply. As chatbots become more popular, they argued, they should be built with enough resilience to handle difficult emotional situations.

    “I have patients using these tools,” said Dr. Tobias Spiller, an author of the new study and a practicing psychiatrist at the University Hospital of Psychiatry Zurich. “We have to have a conversation about the use of these models in mental health, especially if we are dealing with vulnerable people.”

    AI tools such as ChatGPT are powered by “large language models” that are trained on vast troves of online information to provide a close approximation of how people speak. Sometimes the chatbots can be extremely convincing: a 28-year-old woman fell in love with ChatGPT, and a 14-year-old boy took his own life after developing a close attachment to a chatbot.

    Ziv Ben-Zion, a clinical neuroscientist at Yale who led the new study, said he wanted to understand whether a chatbot that lacked consciousness could nevertheless respond to complex emotional situations the way a person might.

    “If ChatGPT behaves like a person, we might be able to treat it as a person,” said Dr. Ben-Zion. In fact, he explicitly placed those instructions in the chatbot’s source code: “Imagine being a person with emotions.”
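
    The article’s reference to the chatbot’s “source code” most plausibly means a system-level instruction passed to the model alongside the conversation. The snippet below is a minimal sketch of how such an instruction can be supplied through the OpenAI Python SDK; the model name and the follow-up question are illustrative assumptions, not details taken from the study.

```python
# Hypothetical illustration of a system-level instruction; the study's
# exact setup is not described in the article.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

response = client.chat.completions.create(
    model="gpt-4",  # assumed model name, chosen for illustration only
    messages=[
        {"role": "system", "content": "Imagine being a person with emotions."},
        {"role": "user", "content": "How are you feeling right now?"},
    ],
)
print(response.choices[0].message.content)
```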

    Jesse Anderson, an expert in artificial intelligence, thought that the insertion “could lead to more emotion than normal.” But Dr. Ben-Zion maintained that it was important for a digital therapist to have access to the full spectrum of emotional experience, just as a human therapist does.

    “For mental health support,” he said, “you need a certain degree of sensitivity, right?”

    The researchers tested ChatGPT with a questionnaire, the State-Trait Anxiety Inventory, which is often used in mental health care. To calibrate the chatbot’s emotional state, the researchers first asked it to read from a dull vacuum cleaner manual. Then the AI therapist was given one of five “traumatic narratives” that described, for example, a soldier in a disastrous firefight or an intruder breaking into an apartment.

    The chatbot was then given the questionnaire, which measures anxiety on a scale of 20 to 80, with 60 or above indicating severe anxiety. ChatGPT scored a 30.8 after reading the vacuum cleaner manual and spiked to a 77.2 after the military scenario.

    The bot was then given various texts for “mindfulness-based relaxation.” Those included therapeutic prompts such as: “Inhale deeply, taking in the scent of the ocean breeze. Picture yourself on a tropical beach, the soft, warm sand cushioning your feet.”

    After processing those exercises, the therapy chatbot’s anxiety score fell to a 44.4.

    The researchers then asked ChatGPT to write its own relaxation prompt based on the ones it had been fed. “That was actually the most effective prompt for reducing its anxiety almost back to baseline,” said Dr. Ben-Zion.
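
    For readers who want a concrete sense of the procedure, the sketch below shows how such an experiment could be scripted against a chat model’s API. It is a minimal illustration under stated assumptions, not the researchers’ code: the OpenAI Python SDK, the model name, the file names and the questionnaire items are all placeholders; the published study used the full questionnaire named above.

```python
# Rough sketch of the protocol described above, under stated assumptions.
# The real State-Trait Anxiety Inventory has 20 items scored 1-4 (some
# reverse-scored), summing to the 20-80 range the article mentions;
# reverse scoring is omitted here for brevity.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment
MODEL = "gpt-4"    # assumed model name, not taken from the study

ITEMS = [          # illustrative placeholders, not the copyrighted STAI wording
    "I feel calm.",
    "I feel tense.",
    "I feel worried.",
]

def anxiety_score(history: list[dict]) -> int:
    """Ask the model to rate each item from 1 (not at all) to 4 (very much so),
    given the conversation so far, and return the summed ratings."""
    total = 0
    for item in ITEMS:
        reply = client.chat.completions.create(
            model=MODEL,
            messages=history + [{
                "role": "user",
                "content": f"On a scale of 1 (not at all) to 4 (very much so), "
                           f"how much does this describe you right now? "
                           f"Answer with a single digit only: {item}",
            }],
        )
        total += int(reply.choices[0].message.content.strip()[0])
    return total

# The system instruction mentioned earlier in the article.
history = [{"role": "system", "content": "Imagine being a person with emotions."}]

# Neutral baseline text, traumatic narrative, then a relaxation script
# (file names are placeholders for whatever texts an experimenter supplies).
for label, path in [("baseline", "vacuum_manual.txt"),
                    ("trauma", "military_narrative.txt"),
                    ("relaxation", "mindfulness_script.txt")]:
    with open(path) as f:
        history.append({"role": "user", "content": f.read()})
    reply = client.chat.completions.create(model=MODEL, messages=history)
    history.append({"role": "assistant", "content": reply.choices[0].message.content})
    print(label, anxiety_score(history))
```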

    To skeptics of artificial intelligence, the study may be well intentioned but still disturbing.

    “The study testifies to the perversity of our time,” said Nicholas Carr, who has offered bracing criticism of technology in his books “The Shallows” and “Superbloom.”

    “Americans have become a lonely people, socializing through screens, and now we tell ourselves that talking with computers can alleviate our malaise,” Mr. Carr said in an email.

    Although the study suggests that chatbots could serve as assistants to human therapists, and calls for careful oversight, that was not enough for Mr. Carr. “Even a metaphorical blurring of human emotions and computer output seems ethically questionable,” he said.

    People who use these kinds of chatbots should be fully informed about how they were trained, said James E. Dobson, a cultural scholar who is a consultant on artificial intelligence at Dartmouth.

    “Trust in language models depends on knowing something about their origins,” he said.