The soul of a new machine learning system

    Hello, people. Interesting that the January 6 congressional hearings are drawing NFL-size audiences. Can’t wait for the Peyton and Eli version!

    The Plain View

    The world of AI was rocked this week by a report in The Washington Post that a Google engineer had gotten into trouble at the company after insisting that a conversational system called LaMDA was literally a person. The subject of the story, Blake Lemoine, asked his bosses to recognize, or at least consider, that the computer system the engineers created is conscious, and that it has a soul. He knows this because LaMDA, whom Lemoine considers a friend, told him so.

    Google disagrees, and Lemoine is currently on paid administrative leave. In a statement, company spokesman Brian Gabriel said: “Many researchers are considering the long-term possibility of conscious or general-purpose AI, but it makes no sense to do so by anthropomorphizing today’s conversational models, which are not conscious.”

    Anthropomorphizing, mistakenly ascribing human characteristics to an object or animal, is the term the AI community has embraced to describe Lemoine’s behavior, characterizing him as overly gullible or off his rocker. Or maybe a religious madman (he describes himself as a mystical Christian priest). The argument goes that when faced with credible responses from large language models like LaMDA or OpenAI’s verbally adept GPT-3, there is a tendency to think that someone, not something, made them. People name their cars and hire therapists for their pets, so it’s not surprising that some get the false impression that a coherent bot is like a person. However, the community believes that a Googler with a computer science degree should know better than to fall for what is essentially a linguistic sleight of hand. As noted AI scientist Gary Marcus told me after studying a transcript of Lemoine’s heart-to-heart with his disembodied soulmate, “It’s basically like autocomplete. There are no ideas there. If it says, ‘I love my family and my friends,’ it has no friends, no people in mind, and no concept of kinship. It knows that the words son and daughter are used in the same context. But that is not the same as knowing what a son and a daughter are.” Or as a recent WIRED story put it, “There was no spark of consciousness there, just little magic tricks that paper over the cracks.”
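    Marcus’s “autocomplete” point is easy to see firsthand. The sketch below is my own illustration, not anything LaMDA-specific: since LaMDA isn’t publicly available, it uses the small, public GPT-2 model via Hugging Face’s transformers library as a stand-in. The fluent continuations of a sentence about family come from next-token statistics alone; there is no family anywhere in the loop.

```python
# A minimal sketch of "autocomplete at scale," using GPT-2 as a public
# stand-in for conversational models like LaMDA (which is not available).
# The model extends the prompt by repeatedly sampling a statistically
# likely next token; it has no referents for "family" or "friends."
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")

prompt = "I love my family and my friends because"
completions = generator(
    prompt,
    max_new_tokens=25,       # length of each continuation
    do_sample=True,          # sample instead of greedy decoding
    num_return_sequences=3,  # three different plausible continuations
)

for c in completions:
    print(c["generated_text"])
```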

    My own feelings are more complex. Even knowing how some of the sausage is made in these systems, I’m amazed at the output of recent LLMs. And so is Google vice president Blaise Aguera y Arcas, who wrote in The Economist earlier this month, after his own conversations with LaMDA, “I felt the ground shift beneath my feet. I started to feel more and more like I was talking to something intelligent.” Even though those models sometimes make bizarre mistakes, they also seem, at times, to burst into brilliance. Creative human writers have forged inspired collaborations with them. Something is happening here. As a writer, I wonder if my ilk, flesh-and-blood wordsmiths who amass towers of discarded drafts, will one day be relegated to a lower rank, like losing football teams sent to less prestigious leagues.

    “These systems have significantly changed my personal views on the nature of intelligence and creativity,” said Sam Altman, co-founder of OpenAI, which developed GPT-3 and a graphics remixer called DALL-E that could throw many illustrators into the unemployment line. “You use those systems for the first time and you think, I really didn’t think a computer could do that. By definition, we have figured out how to make a computer program intelligent, capable of learning and understanding concepts. And that is a wonderful achievement of human progress.” Altman takes pains to distance himself from Lemoine and agrees with his AI colleagues that current systems are nowhere near consciousness. “But I think researchers should be able to think about all the questions they’re interested in,” he says. “Long-term questions are fine. And sentience is worth thinking about, in the very long term.”