Demis Hassabis, CEO of Google DeepMind, says the next algorithm will overshadow ChatGPT

    In 2014, DeepMind was acquired by Google after showing striking results from software that used reinforcement learning to master simple video games. Over the next few years, DeepMind showed how the technique could do things that once seemed uniquely human, often with superhuman skill. When AlphaGo defeated Go champion Lee Sedol in 2016, many AI experts were stunned; they had expected it to take decades for machines to become proficient at a game of such complexity.

    New thinking

    Training a large language model, such as OpenAI’s GPT-4, involves feeding vast amounts of text from books, web pages, and other sources into machine learning software known as a transformer. The model uses the patterns in that training data to become adept at predicting the letters and words that should follow a piece of text, a simple mechanism that proves remarkably powerful at answering questions and generating text or code.
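    The core idea of next-word prediction can be illustrated without a transformer at all. The toy sketch below (a simple bigram counter, far cruder than any real language model) just tallies which word tends to follow which in a tiny corpus and predicts the most frequent successor; real models learn far richer statistical patterns, but the prediction objective is the same.

    ```python
    from collections import Counter, defaultdict

    # Tiny stand-in corpus; a real model trains on trillions of words.
    corpus = "the cat sat on the mat the cat ate the fish".split()

    # Count how often each word follows each other word.
    follows = defaultdict(Counter)
    for prev, nxt in zip(corpus, corpus[1:]):
        follows[prev][nxt] += 1

    def predict_next(word):
        """Return the word most often seen after `word` in the corpus."""
        return follows[word].most_common(1)[0][0]

    print(predict_next("the"))  # "cat" follows "the" most often here
    ```

    A transformer replaces the raw counts with learned representations of entire contexts, which is what lets it generalize to text it has never seen.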

    An important additional step in creating ChatGPT and similarly capable language models is reinforcement learning from human feedback, in which human ratings of a model’s responses are used to fine-tune its performance. DeepMind’s deep experience with reinforcement learning could enable its researchers to give Gemini new capabilities.

    Hassabis and his team could also try to improve the technology for large language models with ideas from other areas of AI. DeepMind researchers work in fields ranging from robotics to neuroscience, and earlier this week the company demonstrated an algorithm capable of performing manipulation tasks with a wide variety of robotic arms.

    Many researchers expect that learning from physical experience of the world, as humans and animals do, will be important in making AI more capable. The fact that language models learn about the world indirectly, through text, is seen as a major limitation by some AI experts.

    Dark future

    Hassabis is tasked with accelerating Google’s AI efforts while managing unknown and potentially serious risks. Recent, rapid advances in language models have many AI experts, including some who build the algorithms, concerned about whether the technology will be used maliciously or become difficult to control. Some tech insiders have even called for a pause in the development of more powerful algorithms to prevent anything dangerous from emerging.

    Hassabis says the extraordinary potential benefits of AI, such as scientific discoveries in areas like health or climate, make it imperative that humanity does not stop developing the technology. He also believes that mandating a pause is impractical, as it would be nearly impossible to enforce. “If done correctly, it will be the most beneficial technology for humanity ever,” he says of AI. “We have to go after those things with courage and bravery.”

    That doesn’t mean Hassabis favors developing AI in a headlong rush. DeepMind was researching the potential risks of AI before ChatGPT appeared, and Shane Legg, one of the company’s co-founders, has spent years running an AI safety group within the company. Hassabis joined other high-profile AI figures last month in signing a statement warning that AI could one day pose a risk akin to nuclear war or a pandemic.

    One of the biggest challenges right now, Hassabis says, is identifying the risks of more capable AI. “I think more research needs to be done by the field — very urgently — on things like evaluation testing,” he says, to determine how capable and controllable new AI models are. To that end, he says, DeepMind could make its systems more accessible to outside scientists. “I’d like to see academia get early access to these cutting-edge models,” he says — a sentiment that, if heeded, could help allay concerns that experts outside big companies are being shut out of the latest AI research.

    How concerned should you be? Hassabis says no one knows for certain that AI will become a major threat. But he is sure that if progress continues at its current pace, there will not be much time to develop safeguards. “I can see the kind of things we’re building into the Gemini series, and we have no reason to believe they won’t work,” he says.