Runaway AI poses a risk of extinction, experts warn

    Key figures in the development of artificial intelligence systems, including Sam Altman, CEO of OpenAI, and Demis Hassabis, CEO of Google DeepMind, have signed a statement warning that the technology they are building may one day pose an existential threat to humanity comparable to that of nuclear war and pandemics.

    “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war,” reads a one-sentence statement released today by the Center for AI Safety, a nonprofit organization.

    The idea that AI could become difficult to control and accidentally or deliberately destroy humanity has long been discussed by philosophers. But in the past six months, after some surprising and unnerving leaps in AI algorithm performance, the issue has become much more widely and seriously discussed.

    In addition to Altman and Hassabis, the statement was signed by Dario Amodei, CEO of Anthropic, a startup dedicated to developing AI with a focus on safety. Other signatories include Geoffrey Hinton and Yoshua Bengio – two of the three academics awarded the Turing Award for their work on deep learning, the technology underpinning modern advances in machine learning and AI – and dozens of entrepreneurs and researchers working on advanced AI problems.

    “The statement is a great initiative,” said Max Tegmark, a physics professor at the Massachusetts Institute of Technology and director of the Future of Life Institute, a nonprofit that focuses on the long-term risks of AI. In March, Tegmark’s Institute published a letter calling for a six-month pause in advanced AI algorithm development so that the risks could be assessed. The letter was signed by hundreds of AI researchers and executives, including Elon Musk.

    Tegmark says he hopes the statement will encourage governments and the general public to take AI’s existential risks more seriously. “The ideal outcome is for the AI extinction threat to become mainstream so that everyone can talk about it without fear of ridicule,” he added.

    Dan Hendrycks, director of the Center for AI Safety, compared the current moment of concern about AI to the debate among scientists about the creation of nuclear weapons. “We need to have the conversations that nuclear scientists had before the creation of the atomic bomb,” Hendrycks said in a quote issued along with his organization’s statement.

    The current alarm is linked to several jumps in the performance of AI algorithms known as large language models. These models consist of a specific kind of artificial neural network trained on massive amounts of human-written text to predict the words that should follow a given sequence. Given enough data and with additional training in the form of feedback from humans about good and bad answers, these language models are able to generate text and answer questions with remarkable eloquence and apparent knowledge, even if their answers are often riddled with errors.
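
    To make the “predict the next word” idea concrete, the toy sketch below counts which word tends to follow each word in a tiny sample text and uses those counts to continue a sentence. This is only an illustrative analogy of next-word prediction; actual large language models use deep neural networks with billions of parameters trained on vast corpora, plus human feedback, not simple word counts.

```python
# Toy next-word predictor: count which word follows each word in a small
# corpus, then repeatedly pick the most frequent continuation.
# This bigram counter only sketches the core "predict what comes next" idea;
# real large language models are deep neural networks, not frequency tables.
from collections import defaultdict, Counter

corpus = "the cat sat on the mat . the cat ate the fish .".split()

# Map each word to a counter of the words observed immediately after it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the most frequently observed word after `word`."""
    counts = following.get(word)
    return counts.most_common(1)[0][0] if counts else "."

# Generate a short continuation starting from "the".
word, output = "the", ["the"]
for _ in range(5):
    word = predict_next(word)
    output.append(word)
print(" ".join(output))
```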

    These language models have proved increasingly coherent and capable as they have been given more data and computational power. The most powerful model created to date, OpenAI’s GPT-4, is capable of solving complex problems, including some that seem to require a form of abstraction and common sense.