Spooked by ChatGPT, US lawmakers want to create an AI regulator

    Another change pushed for by lawmakers and industry witnesses alike was a requirement to inform people when they are talking to a language model rather than a human, or when AI technology is making major decisions with life-changing consequences. One example could be a requirement to disclose when a facial recognition match is the basis for an arrest or criminal charge.

    The Senate hearing follows growing interest from US and European governments, and even some tech insiders, in putting new guardrails on AI to prevent it from harming people. In March, a group letter signed by big names in tech and AI called for a six-month pause in AI development, and this month the White House summoned executives from OpenAI, Microsoft, and other companies and announced it would back a public hacking contest to probe generative AI systems. The European Union is also finalizing a far-reaching law, the AI Act.

    IBM’s Montgomery yesterday urged Congress to take inspiration from the AI Act, which categorizes AI systems based on the risks they pose to people or society and regulates, or even bans, them accordingly. She also endorsed the idea of encouraging self-regulation and highlighted her position on IBM’s AI ethics board, though such structures at Google and Axon have been embroiled in controversy.

    The Center for Data Innovation, a tech think tank, said in a letter released after yesterday’s hearing that the US does not need a new AI regulator. “Just as it would be unwise to let one government agency regulate all human decision-making, it would be equally unwise to let one agency regulate all AI,” the letter said.

    “I don’t think it’s pragmatic, and it’s not what they should be thinking about right now,” said Hodan Omaar, a senior analyst at the center.

    Omaar says the idea of launching a whole new agency for AI is a long shot, given that Congress has yet to move forward with other needed tech reforms, such as overarching data privacy protections. She believes it would be better to update existing laws and let federal agencies add AI oversight to their existing regulatory work.

    The Equal Employment Opportunity Commission and the Justice Department issued guidance last summer on how companies that use algorithms in hiring — algorithms that can expect people to look or behave a certain way — can stay compliant with the Americans with Disabilities Act. Such guidance shows how AI policy can overlap with existing legislation and cut across many different communities and use cases.

    Alex Engler, a fellow at the Brookings Institution, says he is concerned the US could repeat the problems that sank federal privacy regulation last fall. That landmark bill was opposed by California lawmakers who withheld their votes because the law would override the state’s own privacy laws. “That’s a fair concern,” Engler says. “But is that enough of a concern to say we’re just not going to have civil society protections for AI? I don’t know about that.”

    While the hearing addressed a range of potential AI harms, from election misinformation to speculative dangers that don’t yet exist, such as self-aware AI, the generative AI systems like ChatGPT that inspired the hearing received the most attention. Multiple senators argued that such systems could deepen inequality and monopolization. The only way to guard against that, said Sen. Cory Booker, a New Jersey Democrat who has previously co-sponsored AI regulation and supported a federal ban on facial recognition, is for Congress to enact rules of the road.