Everyone wants to regulate AI. No one can agree on how

    I agree with each of these points, which may lead us toward the actual limits we might consider to mitigate the dark side of AI. Things like sharing what it takes to train large language models, like the one behind ChatGPT, and allowing opt-outs for those who don’t want their content to be part of what LLMs present to users. Rules against built-in bias. Antitrust laws that stop a few giant corporations from creating an artificial intelligence cabal that homogenizes (and monetizes) virtually all the information we receive. And protection of your personal information as used by those know-it-all AI products.

    But reading that list also highlights how difficult it is to turn uplifting suggestions into binding law. If you look closely at the points of the White House blueprint, it is clear that they apply not only to AI but to virtually everything in technology. Each seems to embody a user right that has been violated since forever. Big tech didn’t wait for generative AI to develop unfair algorithms, opaque systems, data misuse practices, and a lack of opt-outs. That’s the point: the fact that these issues are brought up in a discussion of a new technology only highlights our failure to protect citizens from the ill effects of the technology we already have.

    During that Senate hearing where Altman spoke, senator after senator sang the same refrain: We messed up when it came to regulating social media, so let’s not mess it up with AI. But there is no statute of limitations for making laws to curb past abuses. Last I checked, billions of people, including just about everyone in the US who has the means to poke a smartphone screen, are still on social media, being bullied, having their privacy compromised, and being exposed to horrors. There is nothing to prevent Congress from taking tougher action against those companies and, in particular, from passing privacy legislation.

    The fact that Congress has failed to do so casts serious doubt on the prospects for an AI bill. No wonder some regulators, notably FTC Chair Lina Khan, are not keen on new laws. She argues that current law gives her agency enough jurisdiction to address the issues of bias, anti-competitive behavior, and invasion of privacy that new AI products raise.

    Meanwhile, the difficulty of actually coming up with new laws — and the sheer volume of work that still needs to be done — was highlighted this week when the White House released an update on that AI Bill of Rights. It explained that the Biden administration is working up a sweat to produce a national AI strategy. But apparently the “national priorities” in that strategy are still not nailed down.

    Now the White House wants tech companies and other AI stakeholders — along with the general public — to submit answers to 29 questions about the benefits and risks of AI. Just as the Senate subcommittee asked Altman and his fellow panelists to propose a path forward, the administration is asking businesses and the public for ideas. In its request for information, the White House pledges to “consider any comment, whether it contains a personal story, experiences with AI systems, or technical, legal, research, policy, or scientific materials, or other content.” (I breathed a sigh of relief when I saw that large language models aren’t being solicited for comment, though I’m willing to bet that GPT-4 will be a big contributor despite this omission.)