
Politicians need to quickly learn how AI works

    This week US senators heard alarming testimony suggesting that unchecked AI could steal jobs, spread misinformation, and generally “go all wrong,” in the words of OpenAI CEO Sam Altman (whatever that means). He and several lawmakers agreed that the US may now need a new federal agency to oversee the development of the technology. But there was also agreement at the hearing that no one wants to hamper a technology that could boost productivity and give the US an edge in a new technological revolution.

    Concerned senators might want to consider talking to Missy Cummings, a former fighter pilot and professor of engineering and robotics at George Mason University. She studies the use of AI and automation in safety-critical systems, including cars and airplanes, and returned to academia earlier this year after a stint at the National Highway Traffic Safety Administration, which oversees automotive technology, including Tesla’s Autopilot and self-driving cars. Cummings’ perspective may help politicians and policymakers weigh the promise of much-hyped new algorithms against the risks that lie ahead.

    Cummings told me this week that she left NHTSA with a sense of deep concern about the autonomous systems deployed by many automakers. “We’re in serious trouble with the capabilities of these cars,” said Cummings. “They’re not even close to being as capable as people think they are.”

    I was struck by the parallels with ChatGPT and the similar chatbots that have sparked excitement and concern about the power of AI. Automated driving functions have been around for longer, but like large language models, they rely on machine learning algorithms that are inherently unpredictable, difficult to inspect, and demand a different kind of engineering thinking than conventional software does.

    Like ChatGPT, Tesla’s Autopilot and other autonomous driving projects have been elevated by absurd amounts of hype. Heady dreams of a transportation revolution led automakers, startups, and investors to pour huge sums into developing and deploying a technology that still has many unsolved problems. Through the mid-2010s, regulation of self-driving cars was permissive, with government officials reluctant to put the brakes on a technology that promised to be worth billions to American companies.

    Despite the billions spent on the technology, self-driving cars are still plagued with problems, and some car companies have pulled the plug on major autonomy projects. Meanwhile, as Cummings says, the public is often unclear about how capable semi-autonomous technology really is.

    In a way, it’s good to see governments and legislators moving quickly to propose regulation of generative AI tools and large language models. The current panic is focused on large language models and tools like ChatGPT that are remarkably good at answering questions and solving problems, even if they still have significant shortcomings, including confidently making up facts.

    At this week’s Senate hearing, Altman of OpenAI, which gave us ChatGPT, went so far as to call for a licensing system to control whether companies like his are allowed to work on advanced AI. “My biggest fear is that we – the field, the technology, the industry – will do significant harm to the world,” Altman said at the hearing.