But most of these predictions come from people who work at companies with a commercial interest in AI. Notably, none of the researchers we spoke with for this article were willing to offer a definition of AGI. They were, however, willing to point out how current systems fall short.
“I think AGI would be something that is more robust, more stable – not necessarily smarter in general, but more coherent in its capacities,” said Ariel Goldstein, a researcher at the Hebrew University of Jerusalem. “You'd expect a system that can do X and Y to also be able to do Z and T. Somehow, these systems seem more fragmented in a certain way. To be surprisingly good at one thing and then surprisingly bad at another thing that seems related.”
“I think that's a great distinction, this idea of generalizability,” said Christa Baker, a neuroscientist at NC State University. “You can learn how to reason in one context, but if you encounter a new circumstance, it's not as if you're suddenly an idiot.”
Mariano Schain, a Google engineer who has collaborated with Goldstein, focused on the abilities that underlie this generalization. He cited both long-term and task-specific memory, as well as the ability to apply skills developed in one task to different contexts. These are limited or absent in existing AI systems.
Beyond those specific limits, Baker noted that “there has long been this very human-centric idea of intelligence, that only humans are intelligent.” That view has faded within the scientific community as we have learned more about animal behavior. But there is still a bias toward privileging human-like behavior, such as the human-sounding responses generated by large language models.
The fruit flies that Baker studies can integrate multiple types of sensory information, coordinate four pairs of limbs, navigate complex environments, meet their own energy needs, produce new generations, and more. And they do all of that with brains that contain fewer than 150,000 neurons – far fewer than current large language models.