
A look inside OpenAI's Raid on Thinking Machines Lab

    If anyone ever makes an HBO Max series about the AI industry, this week's events will make up an entire episode.

    On Wednesday, OpenAI's CEO of Applications, Fidji Simo, announced that the company had rehired Barret Zoph and Luke Metz, co-founders of Mira Murati's AI startup, Thinking Machines Lab. Zoph and Metz had left OpenAI at the end of 2024.

    We reported last night on two stories that formed around what led to the departure, and have since learned new information.

    A source with direct knowledge said Thinking Machines leadership believed Zoph was involved in an incident of serious misconduct while working at the company last year. That incident shattered Murati's trust, the source said, and disrupted the pair's working relationship. The source also claimed that Murati fired Zoph on Wednesday – before she knew he was going to OpenAI – due to what the company said were issues arising from the alleged misconduct. Around the time the company learned that Zoph was returning to OpenAI, Thinking Machines raised concerns internally about whether he had shared confidential information with competitors. (Zoph did not respond to several requests for comment from WIRED.)

    Meanwhile, Simo claimed in a Wednesday memo to employees that the hires had been in the works for weeks and that Zoph told Murati on Monday that he was considering leaving Thinking Machines – days before he was fired. Simo also told employees that OpenAI does not share Thinking Machines' concerns about Zoph's ethics.

    In addition to Zoph and Metz, another former OpenAI researcher who worked at Thinking Machines, Sam Schoenholz, is rejoining the ChatGPT maker, according to Simo's announcement. At least two more Thinking Machines employees are expected to join OpenAI in the coming weeks, according to a source familiar with the matter. Technology reporter Alex Heath was the first to report the additional hires.

    Another source familiar with the matter disputed the perception that the recent personnel changes were entirely related to Zoph. “This was part of a long discussion at Thinking Machines. There were discussions and disagreements about what the company wanted to build – it was about the product, the technology and the future.”

    Thinking Machines Lab and OpenAI declined to comment.

    In the wake of these events, we've heard from several researchers at leading AI labs who are exhausted by the ongoing drama in their industry. This particular incident is reminiscent of OpenAI's brief ouster of Sam Altman in 2023, known within OpenAI as “the blip.” Murati, then the company's chief technology officer, played a key role in that episode, according to reporting from The Wall Street Journal.

    In the years since Altman's ouster, the drama in the AI industry has continued, with high-profile departures from several major labs, including co-founders Igor Babuschkin of xAI and Daniel Gross of Safe Superintelligence, as well as Yann LeCun of Meta, who founded Facebook's long-standing AI lab, FAIR.

    Some might argue that the drama is justified for a nascent industry whose spending contributes to U.S. GDP growth. And if you believe that any of these researchers could make a few breakthroughs on the road to AGI, it's probably worth following where they go.

    That said, many researchers started before ChatGPT's breakthrough success and seem surprised that their industry is now the source of near-constant scrutiny.

    As long as researchers can continue to raise billion-dollar seed rounds on a whim, we suspect the power shifts in the AI industry will continue apace. HBO Max writers, take note.

    How AI Labs train agents to do your job

    People in Silicon Valley have been thinking for decades about AI displacing jobs. In recent months, however, efforts to get AI to actually do economically valuable work have become much more sophisticated.

    AI labs are getting smarter about the data they use to create AI agents. Last week, WIRED reported that OpenAI has asked external contractors from the firm Handshake to upload examples of their real work from previous jobs to evaluate OpenAI's agents. The companies ask contractors to strip all confidential data and personally identifiable information from these documents. While it's possible that trade secrets or names could slip through, that material probably isn't what OpenAI is after (although experts say the company could be in serious trouble if it happens).