“A lot of headlines have said I think it should be stopped now — and I never said that,” he says. “First of all, I don’t think that’s possible, and I think we should keep developing it because it can do great things. But we must make an equal effort to mitigate or prevent the possible bad consequences.”
Hinton says he didn’t leave Google to protest its handling of this new form of AI. In fact, he says the company proceeded relatively cautiously despite being ahead in the field. Researchers at Google invented a type of neural network known as a transformer, which has been critical to the development of models such as PaLM and GPT-4.
In the 1980s, Hinton, a professor at the University of Toronto, and a handful of other researchers tried to give computers more intelligence by training artificial neural networks with data instead of programming them in the conventional way. The networks could take pixels as input, and as they saw more samples, they adjusted the values connecting their crudely simulated neurons until the system could recognize the content of an image. The approach showed flashes of promise over the years, but it wasn't until about a decade ago that its real power and potential became apparent.
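That training idea can be sketched in miniature. The toy example below is illustrative only, not code from Hinton's work: a single simulated neuron weighs each input pixel, and each time it mislabels a tiny "image" its connection weights are nudged to reduce the error, until it can tell two kinds of patterns apart.

```python
import math
import random

# A single crudely simulated neuron: weigh each pixel, sum,
# and squash the result into a 0..1 confidence score.
def predict(weights, bias, pixels):
    z = sum(w * p for w, p in zip(weights, pixels)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Training: show the neuron labeled samples and nudge its weights
# a little in the direction that shrinks each prediction error.
def train(samples, steps=2000, lr=0.5, seed=0):
    rng = random.Random(seed)
    weights = [rng.uniform(-0.1, 0.1) for _ in range(len(samples[0][0]))]
    bias = 0.0
    for _ in range(steps):
        for pixels, label in samples:
            error = predict(weights, bias, pixels) - label
            weights = [w - lr * error * p for w, p in zip(weights, pixels)]
            bias -= lr * error
    return weights, bias

# Tiny 2x2 "images" flattened to four pixel brightnesses:
# label 1 if the top row is brighter than the bottom row, else 0.
samples = [
    ([1.0, 1.0, 0.0, 0.0], 1),
    ([0.9, 0.8, 0.1, 0.2], 1),
    ([0.0, 0.0, 1.0, 1.0], 0),
    ([0.2, 0.1, 0.8, 0.9], 0),
]

weights, bias = train(samples)
score = predict(weights, bias, [1.0, 0.9, 0.1, 0.0])  # an unseen top-bright image
```

Modern deep learning applies the same weight-adjusting principle, but across millions of neurons arranged in many layers rather than one.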
In 2018, Hinton received the Turing Award, the most prestigious award in computer science, for his work on neural networks. He shared the award with two other pioneering figures, Yann LeCun, Meta's chief AI scientist, and Yoshua Bengio, a professor at the University of Montreal.
At that point, a new generation of multi-layered artificial neural networks, fed massive amounts of training data and run on powerful computer chips, suddenly outperformed any existing program at labeling the content of photos.
The technique, known as deep learning, was the start of a renaissance in artificial intelligence, with big tech companies scrambling to recruit AI experts, build increasingly powerful deep learning algorithms, and apply them to products such as facial recognition, translation and speech recognition.
Google hired Hinton in 2013 after acquiring his company, DNNResearch, founded to commercialize his university lab's deep learning ideas. Two years later, Ilya Sutskever, one of Hinton's former graduate students who had also joined Google, left the search company to co-found OpenAI as a non-profit counterweight to the power Big Tech companies were amassing in AI.
Since its inception, OpenAI has focused on increasing the size of neural networks, the amount of data they gobble up, and the computing power they consume. In 2019, the company reorganized into a for-profit entity with outside investors and later took on $10 billion in investment from Microsoft. It has developed a series of remarkably fluent text generation systems, most recently GPT-4, which powers the premium version of ChatGPT and has stunned researchers with its ability to perform tasks that seem to require reasoning and common sense.
Hinton believes we already have technology that will be disruptive and destabilizing. He points to the risk, as others have done, that more capable language algorithms will be able to run more sophisticated disinformation campaigns and interfere in elections.