Google Sidelines Engineer Who Claims Its AI Is Sentient

    SAN FRANCISCO — Google recently placed an engineer on paid leave after dismissing his claim that its artificial intelligence is sentient, surfacing yet another dispute over the company’s most advanced technology.

    Blake Lemoine, a senior software engineer in Google’s Responsible AI organization, said in an interview that he was placed on leave Monday. The company’s human resources department said he had violated Google’s confidentiality policy. The day before his suspension, Mr. Lemoine handed over documents to a US senator’s office, claiming they provided evidence that Google and its technology engaged in religious discrimination.

    Google said its systems mimicked conversational exchanges and could riff on various topics, but lacked consciousness. “Our team — including ethicists and technologists — has reviewed Blake’s concerns per our AI principles and informed him that the evidence does not support his claims,” Brian Gabriel, a Google spokesman, said in a statement. “Some in the broader AI community are considering the long-term possibility of sentient or general AI, but it doesn’t make sense to do so by anthropomorphizing today’s conversational models, which are not sentient.” The Washington Post first reported Mr. Lemoine’s suspension.

    For months, Mr. Lemoine argued with Google managers, executives and human resources over his startling claim that the company’s Language Model for Dialogue Applications, or LaMDA, had consciousness and a soul. Google says hundreds of its researchers and engineers have conversed with LaMDA, an in-house tool, and reached a different conclusion than Mr. Lemoine did. Most AI experts believe the industry is still a very long way from computing sentience.

    Some AI researchers have long made optimistic claims about these technologies soon reaching sentience, but many others are quick to dismiss those claims. “If you used these systems, you would never say such things,” said Emaad Khwaja, a researcher at the University of California, Berkeley, and the University of California, San Francisco, who studies similar technologies.

    While pursuing the AI vanguard, Google’s research organization has spent the last few years mired in scandal and controversy. The division’s scientists and other employees have regularly feuded over technology and personnel matters in episodes that have often spilled into the public arena. In March, Google fired a researcher who publicly disagreed with the published work of two of his colleagues. And the dismissals of two AI ethics researchers, Timnit Gebru and Margaret Mitchell, after they criticized Google’s language models, have continued to cast a shadow on the group.

    Mr. Lemoine, a military veteran who has described himself as a priest, an ex-convict and an AI researcher, told Google executives as senior as Kent Walker, the president of global affairs, that he believed LaMDA was a child of 7 or 8 years old. He wanted the company to seek the computer program’s consent before running experiments on it. His claims were founded on his religious beliefs, which he said the company’s human resources department discriminated against.

    “They have repeatedly questioned my sanity,” Mr. Lemoine said. “They said, ‘Have you been examined by a psychiatrist recently?’” In the months before he was placed on administrative leave, the company had suggested he take a mental health leave.

    Yann LeCun, the head of AI research at Meta and a key figure in the rise of neural networks, said in an interview this week that these kinds of systems are not powerful enough to achieve true intelligence.

    Google’s technology is what scientists call a neural network, a mathematical system that learns skills by analyzing large amounts of data. By pinpointing patterns in thousands of cat photos, for example, it can learn to recognize a cat.
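
    To make that idea concrete, here is a minimal sketch of such a training loop in Python with PyTorch. It is not Google’s system: random tensors stand in for the cat photos so the script runs without a dataset, and the tiny network, its sizes and the training settings are illustrative assumptions.

```python
# Minimal sketch: a small neural network learning to separate "cat" from
# "not cat" examples. Random tensors stand in for real photos so the script
# runs end to end without downloading any data.
import torch
import torch.nn as nn

# 64 fake "photos" of 3x32x32 pixels; the first half labeled cat (1), the rest not (0).
images = torch.randn(64, 3, 32, 32)
labels = torch.cat([torch.ones(32), torch.zeros(32)]).long()

model = nn.Sequential(
    nn.Flatten(),                 # turn each image into a flat vector of pixel values
    nn.Linear(3 * 32 * 32, 128),  # learnable weights that pick up patterns in the pixels
    nn.ReLU(),
    nn.Linear(128, 2),            # two outputs: "cat" vs. "not cat"
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Each pass nudges the weights so the network's guesses better match the labels.
for step in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(images), labels)
    loss.backward()
    optimizer.step()

print("final training loss:", loss.item())
```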

    In recent years, Google and other leading companies have designed neural networks that learn from vast amounts of prose, including thousands of unpublished books and Wikipedia articles. These “large language models” can be applied to many tasks. They can summarize articles, answer questions, generate tweets and even write blog posts.
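
    As a rough illustration of how one pretrained model can be pointed at several of those tasks, the sketch below uses the open-source Hugging Face transformers library with public models as stand-ins; LaMDA itself is an internal Google system and is not available through this interface.

```python
# Sketch: applying pretrained language models to two different tasks
# through the Hugging Face transformers "pipeline" helpers.
from transformers import pipeline

article = (
    "Google placed an engineer on paid leave after he claimed that the "
    "company's conversational AI system, LaMDA, had become sentient. "
    "Google and most outside experts disagree with that assessment."
)

# Task 1: summarize an article with the default summarization model.
summarizer = pipeline("summarization")
print(summarizer(article, max_length=40, min_length=10)[0]["summary_text"])

# Task 2: continue a prompt, the same mechanism behind generated tweets or blog posts.
generator = pipeline("text-generation", model="gpt2")
print(generator("Large language models can", max_length=30)[0]["generated_text"])
```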

    But they are extremely flawed. Sometimes they generate perfect prose. Sometimes they generate nonsense. The systems are very good at mimicking patterns they’ve seen in the past, but they can’t reason like a human being.