UN officials push for regulation of AI at Security Council meeting

    The UN Security Council held its first session on Tuesday on the threat artificial intelligence poses to international peace and stability, and Secretary-General António Guterres called for a global watchdog to oversee a new technology that has summoned at least as much fear as hope.

    Mr Guterres warned that AI could pave a path for criminals, terrorists and other actors seeking to cause “death and destruction, widespread trauma and profound psychological damage on an unimaginable scale”.

    Last year’s launch of ChatGPT — which can turn prompts into text, mimic voices, and generate photos, illustrations, and videos — has raised alarms about disinformation and manipulation.

    On Tuesday, diplomats and leading experts on AI outlined to the Security Council the risks and threats — along with the scientific and social benefits — of the emerging technology. Much about it remains unknown, they said, even as its development moves forward.

    “It’s like building engines without understanding the science of combustion,” said Jack Clark, co-founder of Anthropic, an AI safety research firm. Private companies, he said, should not be the only creators and regulators of AI.

    Mr Guterres said a UN watchdog should act as a governing body to monitor and enforce AI regulations, in much the same way other agencies oversee aviation, climate and nuclear energy.

    The proposed agency would consist of experts in the field who share their expertise with governments and administrative agencies that may not have the technical know-how to address the threats posed by AI.

    The prospect of a legally binding resolution on AI governance remains distant, but most diplomats endorsed the idea of a global governance mechanism and a set of international rules.

    “No country will be left untouched by AI, so we need to engage and involve the broadest coalition of international actors from all sectors,” said British Foreign Secretary James Cleverly, who chaired the meeting as Britain held the rotating presidency of the Council.

    Russia, breaking with the Council’s majority position, expressed skepticism that enough is known about the risks of AI to consider it a threat to global stability. And China’s ambassador to the United Nations, Zhang Jun, opposed the creation of a set of global laws, saying international regulators must be flexible enough to allow countries to develop their own rules.

    The Chinese ambassador did say that his country opposed the use of AI as a “means to create military hegemony or to undermine a country’s sovereignty”.

    Also raised was the military use of autonomous weapons, whether on the battlefield or for assassinations abroad, such as the satellite-controlled AI robot Israel sent to Iran to kill the leading nuclear scientist Mohsen Fakhrizadeh.

    Mr Guterres said the United Nations should come up with a legally binding agreement by 2026 banning the use of AI in automated weapons of war.

    Prof. Rebecca Willett, director of AI at the University of Chicago’s Data Science Institute, said in an interview that when regulating the technology, it is important not to lose sight of the humans behind it.

    The systems are not fully autonomous and the people who design them should be held accountable, she said.

    “This is one of the reasons the UN is looking at this,” Professor Willett said. “There really must be international repercussions so that a company based in one country cannot destroy another country without violating international agreements. Really enforceable regulations can make things better and safer.”