The White House has struck a deal with major AI developers – including Amazon, Google, Meta, Microsoft, and OpenAI – that commits them to take action to prevent harmful AI models from being released into the world.
Under the agreement, which the White House calls a “voluntary commitment,” the companies pledge to conduct internal testing and to allow external testing of new AI models before they are released publicly. The testing is meant to catch problems such as biased or discriminatory output, cybersecurity flaws, and risks of broader societal harm. Startups Anthropic and Inflection, both developers of notable rivals to OpenAI’s ChatGPT, also signed on to the agreement.
“Companies have a duty to ensure their products are safe before introducing them to the public by testing the safety and capability of their AI systems,” Ben Buchanan, the White House special adviser on AI, told reporters in a briefing yesterday. The risks the companies are asked to watch for include privacy violations and even potential contributions to biological threats. The companies have also committed to publicly disclosing the limitations of their systems and the safety and societal risks they may pose.
The agreement also says the companies will develop watermarking systems that make it easy for people to identify AI-generated audio and imagery. OpenAI already adds watermarks to images produced by its DALL-E image generator, and Google has said it is developing similar technology for AI-generated images. Helping people distinguish what’s real from what’s fake is a growing concern as political campaigns appear to be turning to generative AI ahead of the 2024 US election.
Recent advances in generative AI systems that can create text or images have set off an AI arms race among companies adapting the technology for tasks such as searching the web and writing letters of recommendation. But the new algorithms have also revived concerns about AI reinforcing oppressive social systems such as sexism or racism, fueling election disinformation, or becoming a tool for cybercrime. As a result, regulators and lawmakers in many parts of the world, including Washington, D.C., have stepped up calls for new regulation, including requirements to assess AI systems before they are deployed.
It’s unclear how much the agreement will change how major AI companies operate. Growing awareness of the technology’s potential downsides has already made it common for tech companies to hire people to work on AI policy and testing. Google has teams that test its systems, and it publishes information such as the intended use cases and ethical considerations for some of its AI models. Meta and OpenAI sometimes invite outside experts to try to break their models in an approach known as red-teaming.
“Guided by the enduring principles of safety, security and trust, the voluntary commitments address the risks associated with advanced AI models and promote the adoption of specific practices — such as red team testing and transparency reporting — that will advance the entire ecosystem,” said Microsoft president Brad Smith in a blog post.
The societal risks the agreement asks companies to watch for do not include the environmental footprint of training AI models, a concern now frequently cited in research on the impact of AI systems. Creating a system like ChatGPT can require thousands of powerful computer processors running for extended periods of time.