
Elon Musk ramps up AI efforts, even as he warns of dangers

    In December, Elon Musk became angry about the development of artificial intelligence and put his foot down.

    He had learned of a relationship between OpenAI, the start-up behind the popular chatbot ChatGPT, and Twitter, which he bought for $44 billion in October. OpenAI licensed Twitter’s data — a feed of every tweet — for about $2 million a year to help build ChatGPT, two people familiar with the deal said. Mr. Musk felt the AI start-up was not paying Twitter enough, they said.

    So Mr. Musk cut OpenAI off from Twitter’s data, they said.

    Since then, Mr. Musk has ramped up his own AI activities while publicly warning about the dangers of the technology. He is in talks with Jimmy Ba, a researcher and professor at the University of Toronto, to create a new AI company called X.AI, three people familiar with the talks said. He has hired top AI researchers from Google’s DeepMind to work at Twitter. And he has spoken publicly about creating a rival to ChatGPT that generates politically charged material without restrictions.

    The actions are part of Mr. Musk’s long and complicated history with AI, which is dominated by his conflicting views on whether the technology will ultimately benefit or destroy humanity. Even as he recently jump-started his AI projects, he also signed an open letter last month calling for a six-month pause on the technology’s development because of the “serious risks to society.”

    And although Mr. Musk criticizes OpenAI and intends to compete with it, he helped found the AI lab in 2015 as a nonprofit organization. He has since said he has become disillusioned with OpenAI because it no longer operates as a nonprofit and is building technology that, in his opinion, takes sides in political and social debates.

    What Mr. Musk’s approach to AI comes down to is doing it himself. The 51-year-old billionaire, who also runs the electric car maker Tesla and the rocket company SpaceX, has long viewed his own AI efforts as better, safer alternatives to those of his competitors, according to people who have discussed these matters with him.

    “He believes AI is going to be a major turning point and if it’s mismanaged, it’s going to be disastrous,” said Anthony Aguirre, a theoretical cosmologist at the University of California, Santa Cruz, and one of the founders of the Future of Life Institute, the organization behind the open letter. “Like many others, he wonders: What are we going to do about that?”

    Mr. Musk and Mr. Ba, who is known for creating a popular algorithm used to train AI systems, did not respond to requests for comment. Their talks continue, said the three people familiar with the matter.

    An OpenAI spokeswoman, Hannah Wong, said that while the company was now generating profits for investors, it was still controlled by a nonprofit organization and its profits were capped.

    Mr. Musk’s roots in AI date back to 2011. At the time, he was an early investor in DeepMind, a London start-up that in 2010 began building artificial general intelligence, or AGI: a machine that can do everything the human brain can do. Less than four years later, Google acquired the 50-person company for $650 million.

    At a 2014 space event at the Massachusetts Institute of Technology, Mr. Musk indicated he was hesitant to build AI himself.

    “I think we have to be very careful with artificial intelligence,” he said as he answered questions from the audience. “With artificial intelligence, we summon the demon.”

    That winter, the Future of Life Institute, which examines existential risks to humanity, hosted a private conference in Puerto Rico focused on the future of AI. Mr. Musk gave a speech there, arguing that AI could enter dangerous territory without anyone realizing it and announcing that he would help fund the institute. He gave $10 million.

    In the summer of 2015, Mr. Musk met privately with several AI researchers and entrepreneurs over dinner at Rosewood, a hotel in Menlo Park, California, famous for making deals in Silicon Valley. By the end of that year, he and several others who attended the dinner — including Sam Altman, then president of the start-up incubator Y Combinator, and Ilya Sutskever, a leading AI researcher — had founded OpenAI.

    OpenAI was set up as a non-profit organization, with Mr. Musk and others pledging $1 billion in donations. The lab vowed to make all of its research “open source,” meaning it would share the underlying software code with the world. Mr. Musk and Mr. Altman argued that the threat of malicious AI would be reduced if everyone, rather than just tech giants like Google and Facebook, had access to the technology.

    But when OpenAI began building the technology that would result in ChatGPT, many in the lab realized that sharing the software openly could be dangerous. Using AI, individuals and organizations may be able to generate and spread false information faster and more efficiently than they would otherwise. Many OpenAI employees said the lab should keep some of its ideas and code from the public.

    In 2018, Mr. Musk resigned from OpenAI’s board, in part due to his growing conflict of interest with the organization, two people familiar with the matter said. By then, he was building his own AI project at Tesla: Autopilot, the driver assistance technology that automatically steers, accelerates and decelerates cars on highways. To do this, he poached a key OpenAI employee.

    In a recent interview, Mr. Altman declined to specifically address Mr. Musk, but said Mr. Musk’s split with OpenAI was one of many rifts within the company over the years.

    “There’s discord, distrust, egos,” Mr. Altman said. “The closer people are pointed in the same direction, the more contentious the disagreements are. You see this in sects and religious orders. There are bitter fights between the closest people.”

    After ChatGPT debuted in November, Mr. Musk grew increasingly critical of OpenAI. “We don’t want this to be some kind of demon from hell that maximizes profits, you know,” he said during an interview with former Fox News host Tucker Carlson last week.

    Mr. Musk renewed his complaints that AI was dangerous and accelerated his own efforts to build it. At a Tesla investor event last month, he called on regulators to protect society from AI, even as his car company has used AI systems to push the boundaries of self-driving technologies that have been involved in fatal crashes.

    That same day, Mr. Musk said in a tweet that Twitter would use its own data to train technology along the lines of ChatGPT. Twitter has hired two DeepMind researchers, two people familiar with the hiring said. The Information and Insider previously reported details of Twitter’s hiring and AI efforts.

    During the interview with Mr. Carlson last week, Mr. Musk said that OpenAI no longer served as a check on the power of tech giants. He wanted to build TruthGPT, he said, “a maximum truth-seeking AI that tries to understand the nature of the universe.”

    Last month, Mr. Musk incorporated X.AI. The start-up was founded in Nevada, according to its registration documents, which also list company officers, including Mr. Musk and his finance manager, Jared Birchall. The documents were previously reported by The Wall Street Journal.

    Experts who have discussed AI with Mr. Musk believe he is sincere in his concerns about the dangers of the technology, even if he builds it himself. Others said his position was influenced by other motivations, particularly his efforts to promote and profit from his businesses.

    “He says the robots are going to kill us?” said Ryan Calo, a professor at the University of Washington School of Law who has attended AI events alongside Mr. Musk. “A car that his business made has already killed someone.”