On Thursday, President Joe Biden held a meeting at the White House with CEOs from leading AI companies, including Google, Microsoft, OpenAI and Anthropic, highlighting the importance of ensuring the safety of AI products before deploying them. At the meeting, Biden urged executives to address the risks posed by AI. But some AI experts criticized the exclusion of ethics researchers who have been warning about the dangers of AI for years.
In recent months, generative AI models such as ChatGPT have rapidly gained popularity and sparked intense tech hype, driving companies to develop similar products at a rapid pace.
However, concerns have grown about privacy issues, employment bias, and the potential use of these tools to create misleading information campaigns. According to the White House, the administration called for greater transparency, security evaluations, and protection against malicious attacks during a “candid and constructive discussion” with executives.
Among the most high-profile attendees of the meeting were Google’s Sundar Pichai, Microsoft’s Satya Nadella, OpenAI’s Sam Altman, and Anthropic’s Dario Amodei.
Vice President Kamala Harris chaired the meeting. In a video of Biden “dropping by,” posted on Twitter, the president said, “I just came by to say thank you. What you are doing has tremendous potential – and tremendous danger. I know you understand that. And I hope you can educate us as to what you think is most needed to protect society as well as progress. This is really very important.”
Last fall, the Biden administration released a set of guidelines called the “AI Bill of Rights” that aim to protect Americans from the harmful effects of automated systems, including bias, discrimination and privacy concerns.
AI ethics researchers respond
While Biden’s invitation signaled executive interest in a pressing policy issue, critics of the invited companies’ ethical track records were unimpressed by the meeting, questioning the choice to invite people who they believe represent the very companies that created the AI problems the White House is trying to address.
On Twitter, AI researcher Dr. Timnit Gebru wrote, “It seems like we spend half our time talking to different legislators and agencies and STILL have this crap to ‘protect people’s rights.'”
In 2020, Google fired Gebru after a dispute over a research paper she co-authored highlighting potential risks and biases in large-scale language models. The incident sparked discussion within the AI research community about Google’s commitment to AI ethics.
Similarly, Elizabeth Renieris, an AI ethics researcher at the University of Oxford, tweeted, “Unfortunately, and with all due respect to POTUS, these are not the people who can tell us what is ‘most needed to protect society’ when it comes to #AI.”
The criticism reflects a common divide between what is often referred to as “AI safety” (a loose movement primarily concerned with hypothetical existential risks of AI, openly associated with OpenAI contributors) and “AI ethics” (a group of researchers largely concerned with the misapplications and consequences of current AI systems, including bias and disinformation).
Along these lines, author Dr. Brandeis Marshall suggested organizing a counter-meeting to the White House gathering, one that would include Hugging Face, the Distributed AI Research Institute, the UCLA Center for Critical Internet Inquiry, and the Algorithmic Justice League.
Also on Thursday, the White House announced a $140 million investment to launch seven AI research institutes through the National Science Foundation. In addition, Anthropic, Google, Hugging Face, Nvidia, OpenAI and Stability AI will participate in public evaluations of their AI systems at DEF CON 31.
After the White House meeting, Harris released a statement saying, “The private sector has an ethical, moral and legal responsibility to ensure the safety and security of their products. And every company must abide by existing laws to protect the American people. I look forward to the follow-through and follow-up in the coming weeks.”