Early last year, a hacker gained access to the internal messaging systems of OpenAI, the maker of ChatGPT, and stole data about the design of the company's AI technologies.
According to two people familiar with the incident, the hacker lifted details from an online forum where employees talked about OpenAI’s latest technologies, but did not get into the systems where the company houses and builds its artificial intelligence.
OpenAI executives disclosed the incident to employees during an all-hands meeting at the company’s San Francisco office in April 2023, said the two people, who discussed confidential company information on condition of anonymity.
But executives decided not to share the news publicly because no information about customers or partners had been stolen, the two people said. They did not consider the incident a national security threat, believing the hacker was a private individual with no known ties to a foreign government. The company did not notify the FBI or anyone else in law enforcement.
For some OpenAI employees, the news raised fears that foreign adversaries like China could steal AI technology that — while now largely a work and research tool — could ultimately jeopardize U.S. national security. It also raised questions about how seriously OpenAI took security and exposed divisions within the company over the risks of artificial intelligence.
Following the breach, Leopold Aschenbrenner, an OpenAI technical program manager focused on ensuring that future AI technologies don’t cause serious harm, sent a memo to OpenAI’s board of directors arguing that the company wasn’t doing enough to prevent the Chinese government and other foreign adversaries from stealing its secrets.
Aschenbrenner said that OpenAI fired him this spring for leaking other information outside the company and argued that his firing was politically motivated. He alluded to the breach on a recent podcast, but details of the incident have not been previously reported. He said OpenAI’s security was not strong enough to protect against the theft of key secrets if foreign actors were to infiltrate the company.
“We appreciate the concerns Leopold raised while at OpenAI, and they did not lead to his departure,” said Liz Bourgeois, a spokeswoman for OpenAI. Referring to the company’s efforts to build artificial general intelligence, a machine that can do everything the human brain can, she added: “While we share his commitment to building safe AGI, we disagree with many of the claims he has made about our work since then.”
Fears that a hack of an American tech company could have ties to China are not unreasonable. Last month, Microsoft President Brad Smith testified on Capitol Hill about how Chinese hackers used the tech giant’s systems to launch a large-scale attack on federal government networks.
However, under federal and California law, OpenAI cannot bar people from working at the company based on their nationality. Policy experts have also noted that barring foreign talent from U.S. projects could significantly hamper AI progress in the United States.
“We need the best and brightest minds working on this technology,” Matt Knight, OpenAI’s chief security officer, told The New York Times in an interview. “There are some risks to it, and we need to figure those out.”
(The Times has sued OpenAI and its partner Microsoft, claiming copyright infringement of news content related to AI systems.)
OpenAI isn’t the only company building increasingly powerful systems using rapidly improving AI technology. Some — most notably Meta, the owner of Facebook and Instagram — are freely sharing their designs with the rest of the world as open-source software. They believe that the dangers of current AI technologies are low, and that sharing code allows engineers and researchers across the industry to identify and fix problems.
Today’s AI systems can help spread disinformation online, including text, still images and, increasingly, video. They’re also starting to take away some jobs.
Companies like OpenAI and its competitors Anthropic and Google add restrictions to their AI applications before offering them to individuals and businesses, hoping to prevent people from using the apps to spread disinformation or cause other problems.
But there’s not much evidence that today’s AI technologies pose a significant national security risk. Studies by OpenAI, Anthropic and others over the past year have found that AI is not significantly more dangerous than search engines. Daniela Amodei, a co-founder of Anthropic and the company’s president, said the latest AI technology would not pose a significant risk if its designs were stolen or shared freely with others.
“If it were owned by someone else, could that be hugely damaging to a large part of society? Our answer is, ‘No, probably not,’” she told The Times last month. “Could it accelerate something for a bad actor in the future? Maybe. It’s really speculative.”
Yet researchers and tech executives have long worried that AI could one day fuel the creation of new bioweapons or help hack into government computer systems. Some even believe it could destroy humanity.
A number of companies, including OpenAI and Anthropic, are already locking down their tech operations. OpenAI recently launched a Safety and Security Committee to examine how it should address the risks posed by future technologies. The committee includes Paul Nakasone, a former Army general who led the National Security Agency and Cyber Command. He has also been appointed to OpenAI’s board of directors.
“We started investing in security years before ChatGPT,” Knight said. “We’re on a journey to not only understand the risks and stay ahead of them, but also to increase our resilience.”
Federal officials and state lawmakers are also pushing for government regulations that would ban companies from releasing certain AI technologies and fine them millions of dollars if their technologies cause harm. But experts say those dangers are still years or even decades away.
Chinese companies are building systems of their own that are nearly as powerful as the leading U.S. systems. By some metrics, China has surpassed the United States as the world’s largest producer of AI talent, generating nearly half of the world’s top AI researchers.
“It’s not crazy to think that China will soon overtake the U.S.,” said Clément Delangue, the chief executive of Hugging Face, a company that hosts many of the world’s open-source AI projects.
Some researchers and national security leaders argue that the mathematical algorithms at the heart of current AI systems are not yet dangerous, but could become so. They call for tighter controls on AI labs.
“Even if the worst-case scenarios are relatively low probability, if they have a large impact, then we have a responsibility to take them seriously,” Susan Rice, a former domestic policy adviser to President Biden and former national security adviser to President Barack Obama, said at an event in Silicon Valley last month. “I don’t think it’s science fiction, as many people like to say.”