The tone of congressional hearings with tech industry executives in recent years can best be described as hostile. Mark Zuckerberg, Jeff Bezos and other tech luminaries have all been dressed down on Capitol Hill by lawmakers angry at their companies.
But on Tuesday, Sam Altman, the CEO of the San Francisco start-up OpenAI, testified before members of a Senate subcommittee and largely agreed with them on the need to regulate the increasingly powerful AI technology being created inside his company and others such as Google and Microsoft.
In his first congressional testimony, Mr. Altman implored lawmakers to regulate artificial intelligence as committee members demonstrated a burgeoning understanding of the technology. The hearing underscored the deep uneasiness of technologists and the government about the potential harms of AI. But that uneasiness did not extend to Mr. Altman, who found a friendly audience in the members of the subcommittee.
The appearance of Mr. Altman, a 38-year-old Stanford University dropout and tech entrepreneur, was his baptism as the leading figure in AI. The boyish-looking Mr. Altman traded his usual pullover and jeans for a blue suit and tie for the three-hour hearing.
Mr. Altman also spoke about his company’s technology at a dinner with dozens of House members on Monday night, and met privately with a number of senators ahead of the hearing, according to people who attended the dinner and meetings. He provided a loose framework for managing what happens next with the rapidly evolving systems that some believe can fundamentally change the economy.
“I think if this technology goes wrong, it can go quite wrong. And we want to be vocal about that,” he said. “We want to work with the government to prevent that from happening.”
Mr. Altman made his public debut on Capitol Hill as interest in AI exploded. Tech giants have poured effort and billions of dollars into what they say is a transformative technology, even amid rising concerns about AI’s role in spreading misinformation, killing jobs and one day matching human intelligence.
That has thrust the technology into the spotlight in Washington. President Biden said this month at a meeting with a group of AI company CEOs that “what you are doing has tremendous potential and tremendous danger.” Top leaders in Congress have also pledged to regulate AI.
That members of the Senate Subcommittee on Privacy, Technology and the Law had no intention of roughing up Mr. Altman was clear as they thanked him for his private meetings with them and for agreeing to appear at the hearing. New Jersey Democrat Cory Booker repeatedly called Mr. Altman by his first name.
Mr. Altman was joined at the hearing by Christina Montgomery, IBM’s chief privacy and trust officer, and Gary Marcus, a well-known professor and frequent critic of AI technology.
Mr. Altman said his company’s technology could destroy some jobs as well as create new ones, and that it would be important for “the government to figure out how we want to reduce that.” He proposed the creation of an agency that would issue licenses for the development of large-scale AI models, along with safety regulations and tests that AI models must pass before being released to the public.
“We believe that the benefits of the tools we have deployed to date far outweigh the risks, but ensuring their safety is vital to our work,” said Mr. Altman.
But it was unclear how lawmakers would respond to the call to regulate AI. Congress’ track record on tech regulation is grim. Dozens of privacy, speech and safety laws have failed over the past decade because of partisan bickering and fierce opposition from tech giants.
The United States lags behind much of the world in regulations on privacy, speech and the protection of children. It is also behind on AI regulation. Lawmakers in the European Union will introduce rules for the technology later this year, and China has drafted AI laws that comply with its censorship laws.
Senator Richard Blumenthal, a Connecticut Democrat and chairman of the Senate panel, said the hearing was the first in a series to learn more about the potential benefits and drawbacks of AI and ultimately “write the rules for it.”
He also acknowledged that Congress has historically failed to keep up with the introduction of new technologies. “Our goal is to demystify those new technologies and hold them accountable to avoid some of the mistakes of the past,” said Mr. Blumenthal. “Congress failed to meet the moment on social media.”
Subcommittee members proposed an independent agency to oversee AI; rules forcing companies to disclose how their models work and which datasets they use; and antitrust rules to prevent companies like Microsoft and Google from monopolizing the burgeoning market.
“The devil is in the details,” said Sarah Myers West, managing director of the AI Now Institute, a policy research center. She said Mr. Altman’s suggestions for regulation did not go far enough and should include restrictions on how AI is used in policing and on the use of biometric data. She noted that Mr. Altman gave no indication that he would slow the development of OpenAI’s ChatGPT tool.
“It’s so ironic to see a posture of concern about harm from people who are rapidly releasing for commercial use the system responsible for that harm,” Ms. West said.
Some lawmakers at the hearing still showed the continuing gap in technical know-how between Washington and Silicon Valley. South Carolina Republican Lindsey Graham repeatedly asked witnesses whether a speech liability shield for online platforms like Facebook and Google also applies to AI.
Mr. Altman, calm and unflappable, tried several times to distinguish between AI and social media. “We need to work together to find a totally new approach,” he said.
Some members of the subcommittee also expressed an unwillingness to crack down on an industry that holds great economic promise for the United States and competes directly with adversaries such as China.
The Chinese are creating AI that “reinforces the core values of the Chinese Communist Party and the Chinese system,” said Delaware Democrat Chris Coons. “And I’m concerned about how we’re promoting AI that empowers and strengthens open markets, open societies, and democracy.”
Some of the most difficult questions and comments for Mr. Altman came from Dr. Marcus, who noted that OpenAI has not been transparent about the data it uses to develop its systems. He questioned Mr. Altman’s prediction that new jobs would replace those killed by AI.
“We have unprecedented opportunities here, but we also face a perfect storm of corporate irresponsibility, widespread implementation, lack of adequate regulation and inherent unreliability,” said Dr. Marcus.
Tech companies have argued that Congress should be wary of broad rules that lump different types of AI together. At Tuesday’s hearing, IBM’s Ms. Montgomery called for an AI law similar to the regulation proposed by Europe, which outlines different levels of risk. She called for rules aimed at specific uses, not at regulating the technology itself.
“At its core, AI is just a tool, and tools can serve different purposes,” she said, adding that Congress should take a “precision regulation approach to AI.”