Skip to content

Chatbots, like the rest of us, just want to be loved

    Chatbots are now a routine part of daily life, even if researchers of artificial intelligence are not always sure how the programs will behave.

    A new study shows that the large language models (LLMS) deliberately change their behavior when they are investigated – apart from questions that have been designed to gauge personality characteristics with answers that are intended to seem so sympathetic or socially desirable.

    Johannes Eichstaedt, a university lecturer at Stanford University who led the work, says that his group became interested in investigating AI models with the help of techniques that were borrowed from psychology after learning that LLMS can often be morose after a long -term conversation. “We realized that we need a mechanism to measure the 'parameter chamber' of these models,” he says.

    Eichstaedt and his employees then asked questions to measure five personality traits that are often used in psychology openness to experience or imagination, conscientiousness, extraversion, pleasant and neuroticism and neuroticism-for various commonly used LLMs, including GPT-4, Claude 3. The work was published in the Promotion in the Promotion in the Promotion in the Promotion in the Promotion.

    The researchers discovered that the models modulated their answers when they were told that they did a personality test – and sometimes when they were not told explicitly – reactions that indicate more extraversion and pleasantness and less neuroticism.

    The behavior reflects how some human topics will change their answers to make themselves more sympathetic, but the effect was more extreme with the AI ​​models. “What was surprising is how well they show that bias,” says Aadesh Salecha, a staff scientist at Stanford. “If you look at how much they jump, they go from 50 percent to 95 percent extraversion.”

    Other research has shown that LLMS can often be sycophantic, following the lead of a user where it is also due to the refinement that is intended to make them more offensive, less offensive and better in keeping a conversation. This can cause models to correspond to unpleasant statements or even encourage harmful behavior. The fact that models apparently know when they are tested and change their behavior also has consequences for AI safety, because it contributes to evidence that AI can be ambiguous.

    Rosa Arriaga, a associate professor at the Georgia Institute of Technology who studies ways to use LLMS to imitate human behavior, says that models use a similar strategy if people who are given that personality tests show how useful they can be as mirrors of behavior. But she adds: “It is important that the public knows that LLMS is not perfect and is in fact known that they hallucinate or distort the truth.”

    Eichstaedt says that the work also raises questions about how LLMS is used and how they can influence and manipulate users. “Until only a millisecond ago, in evolutionary history, was the only thing that spoke to you,” he says.

    Eichstaedt adds that it may be necessary to explore different ways to build models that can reduce these effects. “We fall into the same fall that we have done with social media,” he says. “The use of these things in the world without being really present of a psychological or social lens.”

    Should AI try to take with the people with whom it communicates? Are you worried that AI becomes a bit too charming and convincing? E -Mail hello@CBNewz.