
‘The Godfather of AI’ quits Google and warns of danger ahead

    Geoffrey Hinton was a pioneer of artificial intelligence. In 2012, Dr. Hinton and two of his graduate students at the University of Toronto created technology that became the intellectual foundation for the AI systems that the technology industry’s largest companies say hold the key to their future.

    On Monday, however, he officially joined a growing chorus of critics who say those companies are racing toward danger with their aggressive campaign to create products based on generative artificial intelligence, the technology that powers popular chatbots like ChatGPT.

    Dr. Hinton said he quit his job at Google, where he worked for more than a decade and became one of the most respected voices in the field, so that he could speak freely about the risks of AI. A part of him, he said, now regrets his life’s work.

    “I console myself with the normal excuse: if I hadn’t done it, someone else would have,” Dr. Hinton said last week during a lengthy interview in the dining room of his Toronto home, a short walk from where he and his students made their breakthrough.

    Dr. Hinton’s journey from AI pioneer to doomsayer marks a remarkable moment for the tech industry at arguably its most significant turning point in decades. Industry leaders believe the new AI systems could be just as important as the introduction of the web browser in the early 1990s and lead to breakthroughs in areas ranging from drug research to education.

    But many industry insiders are terrified of releasing something dangerous into the wild. Generative AI can already be a tool for misinformation. Soon it could be a risk to jobs. Somewhere along the line, say the technology’s biggest worriers, it could be a risk to humanity.

    “It’s hard to see how you can prevent the bad actors from using it for bad things,” said Dr. Hinton.

    After San Francisco start-up OpenAI released a new version of ChatGPT in March, more than 1,000 technology leaders and researchers signed an open letter calling for a six-month moratorium on the development of new systems, because AI technologies pose “profound risks to society and humanity.”

    A few days later, 19 current and former leaders of the Association for the Advancement of Artificial Intelligence, a 40-year-old academic society, released their own letter warning of the risks of AI. That group included Eric Horvitz, chief scientific officer at Microsoft, which has deployed OpenAI’s technology in a wide variety of products, including the Bing search engine.

    Dr. Hinton, often referred to as “the godfather of AI,” signed neither letter, saying he didn’t want to publicly criticize Google or other companies until he quit his job. He told the company last month that he was resigning and on Thursday spoke by phone with Sundar Pichai, the CEO of Google’s parent company Alphabet. He declined to publicly discuss the details of his conversation with Mr. Pichai.

    Google’s chief scientist, Jeff Dean, said in a statement: “We remain committed to taking a responsible approach to AI. We are constantly learning to understand emerging risks while innovating boldly at the same time.”

    Dr. Hinton, a 75-year-old British expatriate, is a lifelong academic whose career has been driven by his personal beliefs about the development and use of AI. In 1972, as a graduate student at the University of Edinburgh, Dr. Hinton embraced an idea called a neural network. A neural network is a mathematical system that learns skills by analyzing data. At the time, few researchers believed in the idea. But it became his life’s work.

    In the 1980s, Dr. Hinton was a professor of computer science at Carnegie Mellon University, but left the university for Canada because he said he was reluctant to take Pentagon funding. At the time, most AI research in the United States was funded by the Department of Defense. Dr. Hinton strongly opposes the use of artificial intelligence on the battlefield – what he calls “robot soldiers”.

    In 2012, Dr. Hinton and two of his students in Toronto, Ilya Sutskever and Alex Krizhevsky, built a neural network that could analyze thousands of photos and teach itself to identify common objects, such as flowers, dogs and cars.

    Google spent $44 million to acquire a company founded by Dr. Hinton and his two students. And their system led to the creation of increasingly powerful technologies, including new chatbots such as ChatGPT and Google Bard. Mr. Sutskever later became chief scientist at OpenAI. In 2018, Dr. Hinton and two other longtime collaborators received the Turing Award, also known as “the Nobel Prize for computing,” for their work on neural networks.

    Around the same time, Google, OpenAI, and other companies began building neural networks that learned from massive amounts of digital text. Dr. Hinton thought it was a powerful way for machines to understand and generate language, but it was inferior to the way humans interacted with language.

    Then, last year, when Google and OpenAI built systems with much larger amounts of data, his opinion changed. He still believed the systems were inferior to the human brain in some ways, but he thought they eclipsed human intelligence in others. “Maybe what’s happening in these systems,” he said, “is actually much better than what’s happening in the brain.”

    As companies improve their AI systems, he believes, they become increasingly dangerous. “Look at how it was five years ago and how it is now,” he said of AI technology. “Take the difference and propagate it forward. That’s creepy.”

    Until last year, he said, Google acted as a “proper steward” for the technology, careful not to release anything that could cause damage. But now that Microsoft has augmented its Bing search engine with a chatbot — a challenge to Google’s core business — Google is rushing to deploy the same kind of technology. The tech giants are engaged in a competition that may be impossible to stop, said Dr. Hinton.

    His immediate concern is that the internet will be flooded with fake photos, videos and text, and that the average person will “no longer know what is true”.

    He is also concerned that AI technologies will eventually disrupt the labor market. Today, chatbots like ChatGPT often complement human employees, but they could replace paralegals, personal assistants, translators, and others who handle rote tasks. “It takes away the tedious work,” he said. “It can take away more than that.”

    Along the way, he worries that future versions of the technology pose a threat to humanity, as they often learn unexpected behavior from the vast amounts of data they analyze. This becomes a problem, he said, because individuals and companies allow AI systems not only to generate their own computer code but also to run that code on their own. And he fears a day when truly autonomous weapons – those killer robots – will become a reality.

    “The idea that this stuff could actually get smarter than humans — a few people believed that,” he said. “But most people thought it was far away. And I thought it was far away. I thought it was 30 to 50 years away or even longer. Of course I don’t think so anymore.”

    Many other experts, including many of his students and colleagues, say this threat is hypothetical. But Dr. Hinton believes the race between Google, Microsoft, and others will escalate into a global race that won’t stop without some form of global regulation.

    But that may be impossible, he said. Unlike with nuclear weapons, he said, there’s no way to know if companies or countries are secretly working on the technology. The best hope is that the world’s leading scientists will work together on ways to master the technology. “I don’t think they should scale this up further until they understand if they can get it under control,” he said.

    Dr. Hinton said that when people asked him how he could work on technology that was potentially dangerous, he would paraphrase Robert Oppenheimer, who led the US effort to build the atomic bomb: “When you see something that is technically sweet, you go ahead and do it.”

    He doesn’t say that anymore.