Looking further afield, the researchers sought to replicate the performance of humans and baboons with artificial intelligence, using neural network models inspired by basic mathematical ideas about what a neuron does and how neurons are connected. These models, statistical systems powered by high-dimensional vectors and matrices multiplying layers upon layers of numbers, matched the performance of the baboons but not that of humans; they failed to reproduce the regularity effect. However, when the researchers created a model augmented with symbolic elements, giving it a list of properties of geometric regularity such as right angles and parallel lines, it closely replicated human performance.
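To make the contrast concrete, here is a minimal, hypothetical sketch, in Python, of what such a list of symbolic regularity properties might look like for a four-sided shape. It is not drawn from the researchers' actual models; the feature names, the tolerance, and the example shapes are illustrative assumptions. The point is only that these are discrete, symbolic properties (counts of right angles, parallel sides, equal-length sides) of the kind a generic neural network is never given explicitly.

```python
import math

def regularity_features(vertices, tol=1e-6):
    """Illustrative sketch (not the researchers' model): count a few
    symbolic regularity properties of a quadrilateral.

    vertices: four (x, y) points in order around the shape.
    Returns counts of right angles, parallel side pairs, and
    equal-length side pairs.
    """
    n = len(vertices)
    # Edge vectors between consecutive vertices, wrapping around.
    edges = [
        (vertices[(i + 1) % n][0] - vertices[i][0],
         vertices[(i + 1) % n][1] - vertices[i][1])
        for i in range(n)
    ]

    def dot(u, v):
        return u[0] * v[0] + u[1] * v[1]

    def cross(u, v):
        return u[0] * v[1] - u[1] * v[0]

    def length(u):
        return math.hypot(u[0], u[1])

    # Right angles: consecutive edges whose dot product is ~0.
    right_angles = sum(
        1 for i in range(n)
        if abs(dot(edges[i], edges[(i + 1) % n])) < tol
    )
    # Parallel sides: edge pairs whose cross product is ~0.
    parallel_pairs = sum(
        1 for i in range(n) for j in range(i + 1, n)
        if abs(cross(edges[i], edges[j])) < tol
    )
    # Equal-length sides: edge pairs with matching lengths.
    equal_length_pairs = sum(
        1 for i in range(n) for j in range(i + 1, n)
        if abs(length(edges[i]) - length(edges[j])) < tol
    )
    return {
        "right_angles": right_angles,
        "parallel_pairs": parallel_pairs,
        "equal_length_pairs": equal_length_pairs,
    }

# A square scores high on every property; an irregular quadrilateral
# scores near zero on all of them.
print(regularity_features([(0, 0), (1, 0), (1, 1), (0, 1)]))
print(regularity_features([(0, 0), (2, 0.3), (1.7, 1.4), (0.2, 0.9)]))
```

In this toy framing, "regular" shapes are simply the ones rich in such symbolic features, which is roughly the kind of information the symbol-augmented model had access to and the plain neural networks did not.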
These results, in turn, pose a challenge to artificial intelligence. “I love the advancements in AI,” said Dr. Dehaene. “It’s very impressive. But I believe there is a deep aspect missing, which is the processing of symbols” – that is, the ability to manipulate symbols and abstract concepts, as the human brain does. This is the subject of his latest book, “How We Learn: Why Brains Learn Better Than Any Machine…for Now.”
Yoshua Bengio, a computer scientist at the University of Montreal, agreed that today’s AI lacks something having to do with symbols or abstract reasoning. The work of Dr. Dehaene, he said, provides “evidence that human brains use abilities not yet found in state-of-the-art machine learning.”
That’s especially true, he said, of the way we combine and recombine symbols, putting together and reassembling bits of knowledge, which helps us generalize. This gap could explain the limitations of AI (a self-driving car, for example) and the system’s inflexibility when faced with environments or scenarios that differ from its training repertoire. And it is an indication, Dr. Bengio said, of where AI research should go.
Dr. Bengio noted that from the 1950s to the 1980s, symbolic-processing strategies dominated “good old-fashioned AI” (for example, verifying the proof of a theorem). Then came statistical AI and the neural network revolution, which began in the 1990s and gained momentum in the 2010s. Dr. Bengio pioneered this deep learning method, which was directly inspired by the human brain’s network of neurons.
His latest research proposes extending the capabilities of neural networks by training them to generate symbols and other abstract representations.
It’s not impossible to reason abstractly with neural networks, he said; “it’s just that we don’t know how to do it yet.” Dr. Bengio has set up a large project with Dr. Dehaene (and other neuroscientists) to explore how humans’ conscious processing power can inspire and empower the next generation of AI. At the end of the day, Dr. Bengio said, it comes down to our understanding of how brains do it.