
How the collapse of Sam Bankman-Fried’s crypto empire disrupted AI

    SAN FRANCISCO — In April, an artificial intelligence lab in San Francisco called Anthropic raised $580 million for research into “AI safety.”

    Few in Silicon Valley had heard of the year-old lab, which builds AI systems that generate language. But the amount of money promised to the small company dwarfed what venture capitalists invested in other AI startups, including those with some of the most experienced researchers in the field.

    The funding round was led by Sam Bankman-Fried, the founder and CEO of FTX, the cryptocurrency exchange that filed for bankruptcy last month. Following the sudden collapse of FTX, a leaked balance sheet showed that Mr. Bankman-Fried and his colleagues had invested at least $500 million in Anthropic.

    Their investment was part of a quiet and quixotic effort to research and mitigate the dangers of artificial intelligence, a technology that many in Mr. Bankman-Fried’s circle believed could eventually destroy the world and harm humanity. Over the past two years, the 30-year-old entrepreneur and his FTX colleagues funneled more than $530 million — through grants or investments — into more than 70 AI-related companies, academic labs, think tanks, independent projects, and individual researchers to address concerns over the technology, according to a count by The New York Times.

    Now, some of these organizations and individuals aren’t sure they can continue to spend that money, said four people close to the AI effort who were not authorized to speak publicly. They said they were concerned that Mr. Bankman-Fried’s fall could cast doubt on their research and undermine their reputations. And some of the AI startups and organizations may eventually find themselves embroiled in FTX’s bankruptcy proceedings, with their grants potentially being recovered in court, they said.

    The concerns in the AI world are an unexpected consequence of the disintegration of FTX, showing just how far the ripple effects of the crypto exchange’s collapse and Mr. Bankman-Fried’s evaporating fortune have traveled.

    “Some may be surprised by the connection between these two emerging technology areas,” Andrew Burt, a lawyer and visiting lecturer at Yale Law School who specializes in the risks of artificial intelligence, said of AI and crypto. “But under the surface, there are direct connections between the two.”

    Mr. Bankman-Fried, who faces inquiries into the FTX collapse and who spoke at The Times’ DealBook conference on Wednesday, declined to comment. Anthropic declined to comment on his investment in the company.

    The efforts of Mr. Bankman-Fried to influence AI stem from his involvement in “effective altruism,” a philanthropic movement in which donors seek to maximize the long-term impact of their giving. Effective altruists often deal with what they call catastrophic risks, such as pandemics, bioweapons, and nuclear war.

    Their interest in artificial intelligence is particularly acute. Many effective altruists believe that increasingly powerful AI can be good for the world, but worry that it could do serious damage if it is not built in a safe manner. While AI experts agree that any doomsday scenario is a long way off, if it arrives at all, effective altruists have long argued that such a future is not beyond the realm of possibility and that researchers, companies and governments should prepare for it.

    Over the past decade, many effective altruists have worked in leading AI research labs, including DeepMind, owned by Google’s parent company, and OpenAI, founded by Elon Musk and others. They helped create a field of research called AI safety, which aims to explore how AI systems can be used to cause harm or unexpectedly fail on their own.

    Effective altruists have helped drive similar research at Washington think tanks that shape policy. Georgetown University’s Center for Security and Emerging Technology, which studies the impact of AI and other emerging technologies on national security, was largely funded by Open Philanthropy, a giving organization associated with effective altruism and backed by the Facebook co-founder Dustin Moskovitz. Effective altruists also work as researchers within these think tanks.

    Mr. Bankman-Fried has been part of the effective altruism movement since 2014. He embraced an approach called “earning to give,” telling The Times in April that he had deliberately chosen a lucrative career so that he could give away much larger sums.

    In February, he and several of his FTX colleagues announced the Future Fund, which would “support ambitious projects to improve humanity’s long-term prospects.” The fund was led in part by Will MacAskill, a founder of the Center for Effective Altruism, as well as other key figures in the movement.

    The Future Fund pledged $160 million in grants by early September for a wide variety of projects, including research on pandemic preparedness and economic growth. About $30 million was earmarked for donations to a range of organizations and individuals exploring ideas related to AI.

    One of the Future Fund’s AI-related grants was $2 million to a little-known company, Lightcone Infrastructure. Lightcone runs the online discussion site LessWrong, which began exploring the possibility of AI one day destroying humanity in the mid-2000s.

    Mr. Bankman-Fried and his colleagues also funded several other efforts aimed at mitigating the long-term risks of AI, including $1.25 million to the Alignment Research Center, an organization that aims to align future AI systems with human interests so the technology does not go rogue. They also gave $1.5 million for similar research at Cornell University.

    The Future Fund also donated nearly $6 million to three projects involving “large language models,” an increasingly powerful type of AI that can write tweets, emails and blog posts, and even generate computer programs. The grants were intended to help reduce the ways the technology could be used to spread disinformation and to reduce unexpected and unwanted behavior from these systems.

    After FTX filed for bankruptcy, Mr. MacAskill and others running the Future Fund resigned from the project, citing “fundamental questions about the legitimacy and integrity of the business operations” behind it. Mr. MacAskill did not respond to a request for comment.

    In addition to the Future Fund grants, Mr. Bankman-Fried and his colleagues invested directly in start-ups, with the $500 million in funding for Anthropic. The company was founded in 2021 by a group that included a contingent of effective altruists who had left OpenAI. It is working to make AI safer by developing its own language models, which can cost tens of millions of dollars to build.

    Some organizations and individuals have already received their money from Mr. Bankman-Fried and his colleagues. Others received only part of what they were promised. Some are uncertain whether the grants will have to be returned to FTX’s creditors, said the four people with knowledge of the organizations.

    Charities are vulnerable to clawbacks when donors go bankrupt, said Jason Lilien, a partner at the law firm Loeb & Loeb who specializes in charities. Companies that receive venture investments from bankrupt companies may be in a somewhat stronger position than charities, but they are also vulnerable to clawback claims, he said.

    Dewey Murdick, the director of the Center for Security and Emerging Technology, the Georgetown think tank supported by Open Philanthropy, said effective altruists had contributed to important research involving AI.

    “Because they have more funding, there’s more focus on these issues,” he said, citing how there is now more discussion of how AI systems can be designed with safety in mind.

    But Oren Etzioni of the Allen Institute for Artificial Intelligence, an AI lab in Seattle, said the views of the effective altruism community were sometimes extreme and often made today’s technologies appear more powerful or more dangerous than they actually are.

    He said the Future Fund had offered him money this year for research that would help predict the arrival and risks of “artificial general intelligence,” a machine that can do anything the human brain can do. But that idea is not something that can be reliably predicted, Mr. Etzioni said, because scientists do not yet know how to build it.

    “These are smart, honest people investing dollars in a highly speculative venture,” he said.