At first glance, a recent series of research papers from a leading artificial intelligence lab at the University of British Columbia in Vancouver might not seem all that remarkable. Offering incremental improvements to existing algorithms and ideas, they read like the content of a mediocre AI conference or journal.
But the research is actually remarkable. That’s because it’s entirely the work of an “AI scientist” developed at the UBC lab, along with researchers from the University of Oxford and a startup called Sakana AI.
The project represents an early step toward what could prove to be a revolutionary trick: making AI learn by coming up with and exploring new ideas. The ideas themselves aren’t exactly new, though: several of the papers describe tweaks to improve an image-generating technique known as diffusion modeling, while another outlines an approach to speeding up learning in deep neural networks.
“These aren’t groundbreaking ideas. They’re not terribly creative,” admits Jeff Clune, the professor who leads the UBC lab. “But they do seem like pretty cool ideas that someone could try.”
As amazing as today’s AI programs can be, they are limited by their need to consume human-generated training data. If AI programs could instead learn in an open-ended way, by experimenting and exploring “interesting” ideas, they could unlock possibilities beyond anything humans have shown them.
Clune’s lab has previously developed AI programs that learned this way. For example, a program called Omni attempted to generate the behavior of virtual characters in various video game-like environments, setting aside the characters that seemed interesting and then iterating on them with new designs. These programs previously required hand-coded instructions to define interestingness. But large language models provide a way for these programs to identify what’s most intriguing. Another recent project from Clune’s lab used this approach to have AI programs figure out the code that lets virtual characters do all sorts of things in a Roblox-like world.
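To make the idea concrete, here is a minimal sketch of what such an open-ended loop can look like. This is not Omni’s actual code; the function names, the mutation step, and especially the interestingness judge (a crude word-overlap proxy standing in for an LLM call) are all assumptions made for illustration.

```python
import random

def propose_variant(behavior: str) -> str:
    """Stand-in for a generator that tweaks an existing behavior description."""
    tweaks = ["jump higher", "avoid walls", "collect items", "follow the player"]
    return behavior + " + " + random.choice(tweaks)

def interestingness(behavior: str, archive: list[str]) -> float:
    """Placeholder for an LLM call that rates how novel or intriguing a behavior is.
    Here: a crude proxy that rewards behaviors unlike anything already archived."""
    overlap = max((len(set(behavior.split()) & set(b.split())) for b in archive), default=0)
    return 1.0 / (1 + overlap)

def open_ended_search(seed: str, steps: int = 20, threshold: float = 0.2) -> list[str]:
    archive = [seed]
    for _ in range(steps):
        parent = random.choice(archive)   # iterate on something already judged interesting
        child = propose_variant(parent)   # generate a new candidate behavior
        if interestingness(child, archive) > threshold:
            archive.append(child)         # set aside the interesting ones for later rounds
    return archive

if __name__ == "__main__":
    for b in open_ended_search("explore the maze"):
        print(b)
```

In a real system the judge would be a language model prompted to rate novelty, which is the piece that hand-coded rules previously had to supply.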
The AI scientist is one example of Clune’s lab tinkering with the possibilities. The program designs machine learning experiments, uses an LLM to decide which look most promising, then writes and runs the necessary code. Rinse and repeat. Despite the disappointing results, Clune says open-ended learning programs, like language models themselves, could become far more capable as the computing power feeding them is ramped up.
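A rough sketch of that design-run-repeat cycle is below. It is not the UBC or Sakana AI implementation; the idea pool, the ranking step, and the generated experiment are placeholders where real LLM calls would sit.

```python
import random
import subprocess
import sys
import tempfile

def propose_ideas(n: int = 3) -> list[str]:
    """Placeholder for an LLM prompt that brainstorms experiment ideas."""
    pool = ["tweak diffusion noise schedule", "warm-up trick for deep nets",
            "alternative attention normalization"]
    return random.sample(pool, n)

def pick_most_promising(ideas: list[str]) -> str:
    """Placeholder for an LLM that scores ideas; here we just pick one at random."""
    return random.choice(ideas)

def write_experiment_code(idea: str) -> str:
    """Placeholder for an LLM that writes the experiment; here, a trivial script."""
    return f'print("running toy experiment for: {idea}")\nprint("metric:", 0.42)\n'

def run_experiment(code: str) -> str:
    """Execute the generated code in a subprocess and capture its output."""
    with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
        f.write(code)
        path = f.name
    result = subprocess.run([sys.executable, path], capture_output=True, text=True, timeout=60)
    return result.stdout

def research_loop(iterations: int = 3) -> None:
    for i in range(iterations):
        idea = pick_most_promising(propose_ideas())
        output = run_experiment(write_experiment_code(idea))
        print(f"[iteration {i}] {idea!r} ->\n{output}")  # rinse and repeat

if __name__ == "__main__":
    research_loop()
```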
“It feels like you’re exploring a new continent or a new planet,” Clune says of the possibilities LLMs offer. “We don’t know what we’re going to discover, but everywhere we look, there’s something new.”
Tom Hope, an assistant professor at the Hebrew University of Jerusalem and a researcher at the Allen Institute for AI (AI2), says that the AI scientist, like LLMs, seems highly derivative and can’t yet be considered reliable. “None of the components are trustworthy at this point,” he says.
Hope points out that efforts to automate elements of scientific discovery date back decades to the work of AI pioneers Allen Newell and Herbert Simon in the 1970s, and later the work of Pat Langley at the Institute for the Study of Learning and Expertise. He also notes that several other research groups, including a team at AI2, have recently brought in LLMs to help generate hypotheses, write papers, and review research. “They’ve captured the zeitgeist,” Hope says of the UBC team. “The direction is obviously incredibly valuable, potentially.”
Whether LLM-based systems can ever come up with truly new or groundbreaking ideas also remains unclear. “That’s the trillion-dollar question,” Clune says.
Even without scientific breakthroughs, open-ended learning could be vital to developing more capable and useful AI systems in the here and now. A report published this month by Air Street Capital, an investment firm, highlights the potential of Clune’s work to create more powerful and reliable AI agents, or programs that autonomously perform useful tasks on computers. The big AI companies all seem to see agents as the next big thing.
This week, Clune’s lab unveiled its latest open-ended learning project: an AI program that designs and builds AI agents. The AI-designed agents outperform human-designed agents on some tasks, such as math and reading comprehension. The next step is to figure out ways to prevent such a system from generating agents that misbehave. “It’s potentially dangerous,” Clune says of the work. “We have to get it right, but I think it’s possible.”