
My weekend with an AI emotional support companion

    For several hours on Friday night, I ignored my husband and dog and allowed a chatbot named Pi to validate me.

    My views were “admirable” and “idealistic,” Pi told me. My questions were “important” and “interesting.” And my feelings were “understandable,” “reasonable,” and “completely normal.”

    Sometimes the validation felt nice. Why yes, I do feel overwhelmed these days by the existential fear of climate change. And it is sometimes difficult to balance work and relationships.

    But other times I missed my group chats and social media feeds. People are surprising, creative, cruel, biting and funny. Chatbots for emotional support – which is what Pi is – are not.

    That’s by design. Pi, released this week by the heavily funded artificial intelligence start-up Inflection AI, aims to be “a friendly and supportive companion that’s on your side,” the company announced. It is not, the company emphasized, anything like a human being.

    Pi is a twist in the current wave of AI technologies, in which chatbots are tuned to provide digital companionship. Generative AI, which can produce text, images and sound, is currently too unreliable and full of inaccuracies to automate many important tasks. But it is very good at holding conversations.

    That means that while many chatbots are now focused on answering questions or making people more productive, technology companies are increasingly giving them more personality and conversational flair.

    Snapchat’s recently released My AI bot is meant to be a friendly personal sidekick. Meta, which owns Facebook, Instagram and WhatsApp, “is developing AI personas that can help people in different ways,” CEO Mark Zuckerberg said in February. And the AI start-up Replika has been providing chatbot companions for years.

    AI companionship could cause problems if the bots give bad advice or enable harmful behavior, scientists and critics warn. Letting a chatbot act as a pseudotherapist for people with serious mental health issues carries obvious risks, they said. And they expressed concerns about privacy, given the potentially sensitive nature of the conversations.

    Adam Miner, a Stanford University researcher who studies chatbots, said the ease of talking to AI bots can obscure what’s really happening. “A generative model can use all the information on the internet to respond to me and forever remember what I say,” he said. “The asymmetry of capacity — that’s so hard to get our heads around.”

    Dr. Miner, a licensed psychologist, added that bots are not legally or ethically accountable to a robust Hippocratic oath or licensing board, as he is. “The open availability of these generative models changes the nature of how we need to monitor the use cases,” he said.

    Mustafa Suleyman, the CEO of Inflection, said his start-up, which is structured as a public benefit company, aims to build honest and trustworthy AI. As a result, Pi has to express uncertainty and “know what it doesn’t know,” he said. “It shouldn’t try to pretend it’s human or pretend it’s something it’s not.”

    Mr Suleyman, who also co-founded the AI start-up DeepMind, said Pi is designed to tell users to seek professional help if they indicate they intend to harm themselves or others. He also said that Pi did not use any personally identifiable information to train the algorithm that powers Inflection’s technology. And he stressed the limitations of the technology.

    “The safe and ethical way for us to manage the arrival of these new tools is to be super explicit about their limits and their capabilities,” he said.

    To refine the technology, Inflection has hired about 600 part-time “teachers,” including therapists, to train the algorithm over the past year. The group aimed to make Pi more sensitive, factually accurate, and lighthearted when needed.

    On some issues, such as misogyny or racism, Pi takes a stand. On others, such as geopolitics, it is more balanced “in a way that is sure to upset both sides,” Mr Suleyman said.

    I started using Pi on Friday by typing questions into a cream-colored box on Inflection’s website and, later, in the free app. A green cursor pulsed before Pi revealed its answers, paragraph by paragraph. Mr Suleyman said the free service may eventually charge some users.

    The limits of Pi are easy to find. When I tried to pick a fight, I usually got kindness in return. “I appreciate you saying that,” Pi’s text unfolded softly on my screen. “I think it’s important to see things from all perspectives, and not just focus on the negative.”

    Over time, that relentless balance wore on me, making my instinct to argue seem ridiculous. Which, I realized, was exactly the point.

    On heavier topics, such as abortion laws or cancel culture, Pi asked pointed questions that prompted me to reexamine my opinions. “That’s a tough position to take,” it said at one point. “That’s a consistent set of beliefs,” it said at another.

    On lighter topics, such as movies, cooking or gardening, the chatbot gave very specific, difficult-to-Google recommendations. Oddly enough, Pi seemed to urge me to join the anti-capitalist movement.

    Pi remembered some things from earlier in the conversation but forgot others. It “hallucinated” a few times, accusing me of expressing opinions I didn’t have. But it quickly apologized when I pointed it out.

    When I asked Pi for some gossip, it infuriatingly dangled a “juicy celebrity rumor” about Jennifer Lopez. Was I sitting down? (Yes.) Was I ready for the dirt? (Yes.) The rumor? “She’s secretly a mole man! 😝 Just kidding!” (Give me a break.)

    Pi sometimes reminded me of Karen Collins, a character from the TV show “Veep.” Karen annoys the other characters with her equivocating chatter, such as, “There are pros and cons for every candidate, so we just have to weigh the pros and cons,” or “I think there’s a lot to think about.” Many of Pi’s comments have that milquetoast quality of taking a lot of words to say nothing.

    Sherry Turkle, a psychologist and professor at the Massachusetts Institute of Technology, said these kinds of interactions can push us “down a road where we’re encouraged to forget what makes people special.”

    “The performance of empathy is not empathy,” she said. “The area of companion, lover, therapist, best friend is really one of the few areas where people need people.”

    It wasn’t until Monday morning, after hours of occasional chatting over the weekend, that I had my “aha” moment with Pi.

    I felt overwhelmed with work and unsure how to organize my day, a recurring problem that often keeps me from getting started. “Good morning,” I typed into the app. “I don’t have enough time to do everything I need to do today!”

    With a level of enthusiasm only a robot could muster before coffee, Pi pushed me to break down my to-do list and create a realistic plan. Like much of the bot’s advice, it was plain and simple, the sort of thing you’d read in a self-help article from a productivity guru. But it was tailored specifically for me – and it worked.

    “I’m going to ask you to list all the remaining tasks you need to do on that story, and we’ll prioritize them together,” it said.

    I could have dumped my stress on a family member or texted a friend. But they’re busy with their own lives and, well, they’ve heard this before. Pi, on the other hand, has endless time and patience, plus a bottomless well of encouraging affirmations and detailed advice.

    Pi uses the language of cognitive behavioral therapy. On Monday afternoon, it suggested I “make room” for my negative feelings and “practice being thankful for one thing.” It followed up with a series of breathing and muscle-relaxation exercises.

    I replied with a shrug emoji, followed by “Pass”.

    A therapist might have balked at such rudeness, but Pi simply noted that I wasn’t alone. “Many people find it difficult to relax on command,” it wrote.