
Good news! China and the US talk about AI dangers

    Sam Altman, the CEO of OpenAI, recently said China should play a key role in shaping the guardrails placed around the technology.

    “China has some of the best AI talent in the world,” Altman said at a lecture at the Beijing Academy of Artificial Intelligence (BAAI) last week. “Solving alignment for advanced AI systems requires some of the brightest minds from around the world – which is why I really hope Chinese AI researchers will make great contributions here.”

    Altman is in a good position to judge these issues. His company is behind ChatGPT, the chatbot that has shown the world just how fast AI capabilities are evolving. Such advances have led scientists and technologists to call for limits on the technology. In March, many experts signed an open letter calling for a six-month pause in the development of AI algorithms more powerful than those behind ChatGPT. Last month, executives including Altman and Demis Hassabis, CEO of Google DeepMind, signed a statement warning that AI could one day pose an existential risk akin to nuclear war or pandemics.

    Such statements, often signed by executives working on the technology they warn could kill us, can feel hollow. For some, they also miss the point. Many AI experts say it’s more important to focus on the damage AI can already cause by reinforcing societal biases and facilitating the spread of misinformation.

    BAAI chairman Zhang Hongjiang told me that AI researchers in China are also deeply concerned about the technology’s new capabilities. “I really think he [Altman] is doing humanity a service by going on this tour, talking to different governments and institutions,” he said.

    Zhang said a number of Chinese scientists, including the director of BAAI, had signed the letter calling for a pause in the development of more powerful AI systems, but he pointed out that BAAI has long focused on more immediate AI risks. New developments in AI mean that “we will definitely increase efforts to fine-tune AI,” Zhang said. But he added that the problem is tricky because “smarter models can actually make things safer.”

    Altman was not the only Western AI expert to attend the BAAI conference.

    Also in attendance was Geoffrey Hinton, one of the pioneers of deep learning, a technology that underpins all modern AI, who left Google last month to warn people about the risks that increasingly sophisticated algorithms may soon pose.

    Max Tegmark, a Massachusetts Institute of Technology (MIT) professor and director of the Future of Life Institute, which organized the letter calling for a pause in AI development, also spoke about AI risks, while Yann LeCun, another deep learning pioneer, suggested that the current alarm around AI risks may be a bit exaggerated.

    Wherever you stand in the doomsday debate, there is something heartening about experts from the US and China discussing the risks of AI together. The usual rhetoric revolves around the two nations’ struggle to dominate the development of the technology, and it can seem that AI has become hopelessly embroiled in politics. For example, Christopher Wray, the head of the FBI, told the World Economic Forum in Davos in January that he is “deeply concerned” about the Chinese government’s AI program.

    Given that AI will be critical to economic growth and strategic advantage, international competition is not surprising. But no one benefits if the technology is developed recklessly, and the rising power of AI calls for a degree of cooperation between the US, China, and other global powers.

    As with other world-changing technologies, such as nuclear power and the tools needed to combat climate change, finding common ground may start with the scientists who understand the technology best.