At the end of March, more than 1,000 technology leaders, researchers and other experts working in and around artificial intelligence signed an open letter warning that AI technologies pose “profound risks to society and humanity.”
The group, which includes Elon Musk, the CEO of Tesla and the owner of Twitter, urged AI labs to halt development of their most powerful systems for six months so they could better understand the dangers behind the technology.
“Powerful AI systems should not be developed until we are confident that their effects will be positive and that their risks will be manageable,” the letter said.
The letter, which now has more than 27,000 signatures, was short. Its language was broad. And some of the names behind it seemed to have a conflicted relationship with AI. Musk, for example, is building his own AI start-up, and he is one of the top donors to the organization that wrote the letter.
But the letter represented a growing concern among AI experts that the latest systems, especially GPT-4, the technology introduced by the San Francisco start-up OpenAI, could harm society. They believe future systems will be even more dangerous.
Some risks have arrived. Others won’t for months or years. Still others are purely hypothetical.
“Our ability to understand what can go wrong with very powerful AI systems is very weak,” said Yoshua Bengio, a professor and AI researcher at the University of Montreal. “So we have to be very careful.”
Why are they concerned?
Dr. Bengio is perhaps the most important person who signed the letter.
Working with two other academics – Geoffrey Hinton, until recently a researcher at Google, and Yann LeCun, now chief AI scientist at Meta, the owner of Facebook – Dr. Bengio has spent the past four decades developing the technology that powers systems such as GPT-4. In 2018, the researchers received the Turing Award, often called “the Nobel Prize of computing,” for their work on neural networks.
A neural network is a mathematical system that learns skills by analyzing data. About five years ago, companies like Google, Microsoft and OpenAI began building neural networks that learn from massive amounts of digital text; these are called large language models, or LLMs.
By locating patterns in that text, LLMs learn to generate text themselves, including blog posts, poems, and computer programs. They can even carry on a conversation.
This technology can help computer programmers, writers and other workers generate ideas and get things done faster. But Dr. Bengio and other experts also warned that LLMs can learn unwanted and unexpected behaviors.
These systems can generate false, biased and otherwise toxic information. Systems like GPT-4 get facts wrong and fabricate information, a phenomenon called “hallucination.”
Companies are working on these problems. But experts like Dr. Bengio worry that if researchers make these systems more powerful, they will introduce new risks.
Short-term risk: disinformation
Because these systems deliver information with what appears to be complete confidence, it can be a struggle to separate truth from fiction when using them. Experts are concerned that people will rely on these systems for medical advice, emotional support and the raw information they use to make decisions.
“There’s no guarantee that these systems will be correct on whatever task you give them,” said Subbarao Kambhampati, a professor of computer science at Arizona State University.
Experts are also concerned that people will misuse these systems to spread disinformation. Because they can converse in human ways, they can be surprisingly persuasive.
“We now have systems that can communicate with us through natural language, and we can’t tell the real from the fake,” said Dr. Bengio.
Medium-term risk: job loss
Experts fear that the new AI systems could be job killers. Today, technologies such as GPT-4 often complement human workers. But OpenAI recognizes that they could replace some employees, including people who curate content on the web.
They cannot yet duplicate the work of lawyers, accountants or doctors. But they could replace paralegals, personal assistants and translators.
A paper written by OpenAI researchers estimated that 80 percent of the US workforce could see at least 10 percent of their work tasks affected by LLMs, and that 19 percent of workers could see at least 50 percent of their tasks affected.
“There is some evidence that steady jobs will disappear,” said Oren Etzioni, the founder and CEO of the Allen Institute for AI, a research lab in Seattle.
Long-term risk: loss of control
Some of the people who signed the letter also believe that artificial intelligence could slip out of our control or destroy humanity. But many experts say that’s a huge exaggeration.
The letter was written by a group from the Future of Life Institute, an organization dedicated to exploring existential risks to humanity. They warn that because AI systems often learn unexpected behavior from the vast amounts of data they analyze, they could pose serious, unexpected problems.
They worry that if companies connect LLMs to other Internet services, these systems could gain unexpected powers because they can write their own computer code. They say developers will create new risks if they let powerful AI systems run their own code.
“If you look at a straightforward extrapolation from where we are now to three years from now, things are pretty weird,” said Anthony Aguirre, a theoretical cosmologist and physicist at the University of California, Santa Cruz, and a co-founder of the Future of Life Institute.
“If you take a less likely scenario — where things really take off, where there’s no real governance, where these systems turn out to be more powerful than we thought they would be — it gets really, really crazy,” he said.
Dr. Etzioni said talking about existential risk was hypothetical. But he said other risks — particularly disinformation — were no longer speculation.
“Now we have some real problems,” he said. “They are bona fide. They require a responsible response. They may need regulation and legislation.”