ChatGPT has fueled new hopes about the potential of artificial intelligence, but also new fears. Today, the White House joined the chorus of concerns, announcing it will support a massive hacking exercise at the Defcon security conference this summer to probe generative AI systems from companies including Google.
The White House Office of Science and Technology Policy also said $140 million will be spent on launching seven new national AI research institutes focused on developing ethical, transformative AI for the public good, bringing the total number nationally to 25.
The announcement came hours before a meeting on the opportunities and risks of AI between US Vice President Kamala Harris and executives from Google and Microsoft, as well as the startups Anthropic and OpenAI, which created ChatGPT.
The White House’s AI intervention comes as appetite for regulating the technology grows around the world, spurred by the hype and investment sparked by ChatGPT. In the European Parliament, lawmakers are negotiating final updates to a sweeping AI law that will limit and even ban some uses of AI, including adding coverage of generative AI. Brazilian lawmakers are also considering regulations aimed at protecting human rights in the age of AI. Last month, the Chinese government announced draft regulations for generative AI.
In Washington, D.C., Democratic Senator Michael Bennet introduced a bill last week that would create an AI task force focused on protecting citizens’ privacy and civil rights. Also last week, four US regulatory agencies, including the Federal Trade Commission and the Department of Justice, jointly pledged to use existing laws to protect the rights of US citizens in the age of AI. This week, Democratic Senator Ron Wyden’s office confirmed plans to try again to pass a law called the Algorithmic Accountability Act, which would require companies to assess their algorithms and disclose when an automated system is in use.
Arati Prabhakar, director of the White House Office of Science and Technology Policy, said at an event organized by Axios in March that government research into AI was necessary to make the technology useful. “If we want to seize these opportunities, we have to start wrestling with the risks,” Prabhakar said.
The White House-backed hacking exercise, designed to expose weaknesses in generative AI systems, will take place this summer at the Defcon security conference. Thousands of participants, including hackers and policy experts, will be asked to examine how generative models from companies like Google, Nvidia, and Stability AI align with the Biden administration’s AI Bill of Rights, announced in 2022, and a National Institute of Standards and Technology risk management framework released earlier this year.
Points will be awarded under a capture-the-flag format to encourage participants to test for a wide range of bugs and unsavory behavior from the AI systems. The event is being run in concert with Microsoft, the nonprofit SeedAI, the AI Vulnerability Database, and Humane Intelligence, a nonprofit founded by data and social scientist Rumman Chowdhury. She previously led a group at Twitter working on ethics and machine learning, and organized a bias bounty that exposed biases in the social network’s automatic photo cropping.