An open letter signed by hundreds of prominent artificial intelligence experts, tech entrepreneurs, and scientists calls for a pause in the development and testing of AI technologies more powerful than OpenAI’s language model GPT-4, so that the risks such systems may pose can be properly studied.
It warns that language models like GPT-4 can already compete with humans on a growing range of tasks and can be used to automate jobs and spread misinformation. The letter also raises the distant prospect of AI systems that could replace humans and remake civilization.
“We call on all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4,” says the letter, whose signatories include Yoshua Bengio, a professor at the University of Montreal who is considered a pioneer of modern AI; historian Yuval Noah Harari; Skype co-founder Jaan Tallinn; and Twitter CEO Elon Musk.
The letter, which was written by the Future of Life Institute, an organization focused on technological risks to humanity, adds that the pause should be “public and verifiable” and should involve everyone working on advanced AI models such as GPT-4. It does not suggest how a halt in development could be verified, but adds that “if such a pause cannot be enacted quickly, governments should step in and institute a moratorium,” something that seems unlikely to happen within six months.
The signatories purportedly include people from numerous technology companies that build advanced language models, including Microsoft and Google. Neither company responded to requests for comment on the letter. Hannah Wong, a spokesperson for OpenAI, says the company spent more than six months working on the safety and alignment of GPT-4 after training the model. She adds that OpenAI is not currently training GPT-5.
The letter comes as AI systems make increasingly daring and impressive leaps. GPT-4 was only announced two weeks ago, but its capabilities have generated much excitement and considerable concern. Available through ChatGPT, OpenAI’s popular chatbot, the language model scores highly on many academic tests and can correctly solve tricky questions that are widely believed to require more sophisticated intelligence than AI systems have previously demonstrated. Yet GPT-4 also makes numerous trivial logical errors. And like its predecessors, it sometimes “hallucinates” false information, betrays deep-seated societal prejudices, and can be induced to say hateful or potentially harmful things.
Part of the signatories’ concern is that OpenAI, Microsoft, and Google have entered a for-profit race to develop and release new AI models as quickly as possible. At such a pace, the letter argues, developments are moving faster than society and regulators can come to terms with.