AI regulation is in its ‘early days’

    Regulating artificial intelligence has been a hot topic in Washington in recent months, with lawmakers holding hearings and press conferences and the White House announcing voluntary AI safety commitments from seven tech companies on Friday.

    But a closer look at the activity raises questions about how meaningful the actions are in shaping policy around the rapidly evolving technology.

    The answer is that there is not much substance yet. The United States is only at the beginning of what is likely to be a long and difficult path toward creating AI rules, lawmakers and policy experts said. While there have been hearings, meetings with top tech executives at the White House, and bills introduced on AI, it is too early to predict even the roughest outlines of regulations to protect consumers and contain the risks the technology poses to jobs, the spread of disinformation, and security.

    “This is early days, and no one knows what a law will look like yet,” said Chris Lewis, president of the consumer group Public Knowledge, which has called for the creation of an independent agency to regulate AI and other technology companies.

    The United States lags far behind Europe, where lawmakers are preparing to introduce an AI law this year that would place new restrictions on what are perceived to be the technology’s riskiest applications. In contrast, much disagreement remains in the United States about how best to handle a technology that many American lawmakers are still trying to understand.

    That suits many of the tech companies just fine, policy experts said. While some companies have said they welcome rules around AI, they have also argued against strict regulations like those taking shape in Europe.

    Here’s an overview of the state of AI regulation in the United States.

    The Biden administration has been on a fast-track listening tour with AI companies, academics and civil society groups. The effort began in May, when Vice President Kamala Harris met with the CEOs of Microsoft, Google, OpenAI and Anthropic at the White House and urged the tech industry to take safety more seriously.

    On Friday, representatives of seven tech companies appeared at the White House to announce a set of principles for making their AI technologies safer, including third-party security checks and the watermarking of AI-generated content to help stem the spread of misinformation.

    Many of the announced practices were already in place at OpenAI, Google and Microsoft, or were on track to take effect. They do not represent new regulation. The promises of self-regulation also fell short of what consumer groups had hoped to see.

    “Voluntary commitments are not enough when it comes to Big Tech,” said Caitriona Fitzgerald, deputy director of the Electronic Privacy Information Center, a privacy group. “Congress and federal regulators must put in place meaningful, enforceable guardrails to ensure the use of AI is fair, transparent and protects the privacy and civil rights of individuals.”

    Last fall, the White House introduced a blueprint for an AI Bill of Rights, a set of guidelines on consumer protections involving the technology. The guidelines are not regulations, either, and are not enforceable. This week, White House officials said they were working on an executive order on AI but did not disclose details or timing.

    The loudest drumbeat on regulating AI has come from lawmakers, some of whom have introduced bills on the technology. Their proposals include the creation of an agency to oversee AI, liability for AI technologies that spread disinformation, and licensing requirements for new AI tools.

    Lawmakers have also held hearings on AI, including one in May with Sam Altman, the CEO of OpenAI, which makes the ChatGPT chatbot. Some lawmakers tossed around ideas for other regulations during the hearings, including nutrition labels to notify consumers of AI risks.

    The bills are in their early stages and have so far failed to win the support needed to advance. Last month, Senator Chuck Schumer, Democrat of New York and the Senate majority leader, announced a monthslong process for creating AI legislation that includes educational sessions for members in the fall.

    “In many ways we are starting from scratch, but I believe Congress is up to the challenge,” he said during a speech at the Center for Strategic and International Studies at the time.

    Regulators are beginning to act on a number of problems arising from AI

    Last week, the Federal Trade Commission opened an investigation into OpenAI’s ChatGPT, requesting information about how the company secures its systems and how the chatbot could potentially harm consumers through the creation of false information. The FTC chair, Lina Khan, has said she believes the agency has ample power under consumer protection and competition laws to police problematic behavior by AI companies.

    “Waiting for Congress to act isn’t ideal given the usual timeline of congressional action,” said Andres Sawicki, a law professor at the University of Miami.