White House unveils initiatives to mitigate AI risks

    The White House on Thursday announced its first new initiatives to tame the risks of artificial intelligence since an explosion of AI-powered chatbots led to growing calls to regulate the technology.

    The National Science Foundation plans to spend $140 million on new research centers dedicated to AI, White House officials said. The government also pledged to release draft guidelines for government agencies to ensure their use of AI "safeguards the rights and safety of the American people," adding that several AI companies had agreed to make their products available for scrutiny at a cybersecurity conference in August.

    The announcements came hours before Vice President Kamala Harris and other government officials were scheduled to meet with the CEOs of Google, Microsoft, OpenAI, the maker of the popular ChatGPT chatbot, and Anthropic, an AI start-up, to discuss the technology. A senior government official said Wednesday that the White House intended to impress upon the companies that they had a responsibility to address the risks of new AI developments.

    The White House is under increasing pressure to scrutinize AI capable of generating sophisticated prose and lifelike imagery. The explosion of interest in the technology began last year, when OpenAI released ChatGPT to the public and people immediately started using it to search for information, do schoolwork and assist with their jobs. Since then, some of the biggest tech companies have rushed to build chatbots into their products and accelerate AI research, while venture capitalists have poured money into AI start-ups.

    But the rise of AI also raises questions about how the technology will transform economies, shake up geopolitics and amplify criminal activity. Critics worry that many AI systems are opaque but extremely powerful, with the potential to make discriminatory decisions, replace people in their jobs, spread disinformation and perhaps even break the law.

    President Biden recently said it “remains to be seen” whether AI is dangerous, and some of his top appointees have pledged to intervene if the technology is used in a harmful way.

    Sam Altman, standing, the CEO of OpenAI, will meet with Vice President Kamala Harris on Thursday. Credit: Jim Wilson/The New York Times

    Spokesmen for Google and Microsoft declined to comment ahead of the White House meeting. An Anthropic spokesperson confirmed that the company would be attending. An OpenAI spokeswoman did not respond to a request for comment.

    The announcements build on past government attempts to put guardrails on AI. Last year, the White House released a so-called “Blueprint for an AI Bill of Rights,” which stated that automated systems should protect users’ data privacy, protect them from discriminatory outcomes, and explain why certain actions were taken. In January, the Department of Commerce also released a framework for mitigating risk in AI development, which has been in the works for years.

    The introduction of chatbots such as ChatGPT and Google’s Bard has put enormous pressure on governments to act. The European Union, which was already negotiating regulations for AI, has faced new demands to regulate a wider range of AI, rather than just systems deemed inherently high risk.

    In the United States, members of Congress, including Senator Chuck Schumer of New York, the majority leader, have taken steps to draft or propose legislation to regulate AI.

    A group of government agencies pledged in April to “monitor the development and use of automated systems and promote responsible innovation” while punishing violations of the law committed using the technology.

    In a guest essay in The New York Times on Wednesday, Lina Khan, the chair of the Federal Trade Commission, said the nation was at a “major decision point” with AI. She compared the technology’s recent advancements to the birth of tech giants like Google and Facebook, warning that without proper regulation, the technology could entrench the power of the biggest tech companies and give scammers a powerful tool.

    “As the use of AI becomes more widespread, government officials have a responsibility to ensure that this hard-learned history does not repeat itself,” she said.