The excitement surrounding OpenAI CEO Sam Altman’s arrival in London was palpable from the queue that snaked around the University College London building ahead of his Wednesday afternoon speech. Hundreds of enthusiastic students and admirers of OpenAI’s chatbot ChatGPT had gathered to watch the UK leg of Altman’s world tour, which is expected to take him to some 17 cities. This week he has already visited Paris and Warsaw; last week he was in Lagos. Next he continues to Munich.
But the queue was joined by a small group of protesters who had traveled there to loudly voice their fear that AI is advancing too quickly. “Sam Altman is willing to bet humanity on the hope of some kind of transhumanist utopia,” one protester shouted into a megaphone. Ben, another protester, who declined to share his last name in case it affected his job prospects, echoed that worry: “We are particularly concerned about the development of future AI models that could be existentially dangerous to humanity.”
Speaking to a packed house of nearly 1,000 people, Altman seemed unfazed. He wore a sharp blue suit with green patterned socks and spoke in clipped answers, always to the point. His tone was optimistic as he explained how he thinks AI can revitalize the economy. “I’m excited that this technology can bring back the missing productivity gains of the past few decades,” he said. But while he didn’t mention the protests outside, he admitted to being concerned about how generative AI could be used to spread disinformation.
“People are already good at creating disinformation, and maybe the GPT models will make it easier. But I’m not afraid of that,” he said. “I think one thing [that] will be different [with AI] is the interactive, personalized, persuasive power of these systems.”
While OpenAI plans to build safeguards to ensure ChatGPT refuses to spread disinformation, and to create monitoring systems, he said, mitigating these consequences will be difficult once the company releases open-source models to the public, as it announced several weeks ago it intends to do. “The OpenAI techniques of what we can do on our own systems won’t work the same.”
Despite that warning, Altman said it’s important that artificial intelligence not be over-regulated while the technology is still emerging. The European Parliament is currently debating legislation called the AI Act, which would set new rules governing how companies can develop such models and establish an AI office to oversee compliance. The UK, by contrast, has decided to divide responsibility for AI among existing regulators, including those covering human rights, health and safety, and competition, rather than creating a dedicated oversight body.
“I think it’s important to strike the right balance here,” Altman said, referring to debates now taking place among policymakers around the world about how to create rules for AI that protect societies from potential harm without curbing innovation. “The correct answer is probably something between the traditional European-British approach and the traditional American approach,” Altman said. “I hope we can all get it right this time.”