
Elon Musk’s appetite for destruction

    And yet, as Robert Lowell wrote, “No rocket goes as far astray as man.” In recent months, as outrage mounted on Twitter and elsewhere, Musk seemed determined to squander much of the goodwill he had built up over his career. I asked Slavik, the plaintiffs’ attorney, whether the recent shift in public opinion against Musk would make his job easier in court. “I think at least there are more people who are skeptical of his judgment at the moment than before,” he said. “If I was on the other side, I’d be worried about it.”

    However, some of Musk’s most questionable decisions begin to make sense when viewed as the result of blunt utilitarian calculus. Last month, Reuters reported that Neuralink, Musk’s medical device company, had caused the needless deaths of dozens of laboratory animals through hasty experiments. Internal messages from Musk made it clear that the urgency came from above. “We’re just not moving fast enough,” he wrote. “It makes me crazy!” The cost-benefit analysis must have seemed clear to him: Neuralink had the potential to cure paralysis, he believed, which would improve the lives of millions of future people. The suffering of a smaller number of animals was worth it.

    This form of crude long-term thinking, where the sheer size of future generations gives them added ethical weight, is even reflected in Musk’s statements about buying Twitter. He called Twitter a “digital town square” responsible for nothing less than preventing another American civil war. “I didn’t do it to make more money,” he wrote. “I did it to try to help humanity, whom I love.”

    Autopilot and FSD represent the pinnacle of this approach. “The overarching goal of Tesla engineering,” Musk wrote, “is to maximize the area under the user happiness curve.” Unlike with Twitter or even Neuralink, people were dying as a result of his decisions, but no matter. In 2019, in a spirited email exchange with activist investor and steadfast Tesla critic Aaron Greenspan, Musk bristled at the suggestion that Autopilot was anything other than life-saving technology. “The data is unequivocal that Autopilot is safer than human driving by a significant margin,” he wrote. “It is unethical and incorrect of you to claim otherwise. That puts the public at risk.”

    I wanted to ask Musk to elaborate on his philosophy of risk, but he did not respond to my interview requests. So instead I spoke with Peter Singer, a leading utilitarian philosopher, to sort through some of the ethical issues. Was Musk right when he argued that anything that slows the development and adoption of autonomous vehicles is inherently unethical?

    “I think he has a point,” Singer said, “if he’s right about the facts.”

    Musk rarely talks about Autopilot or FSD without mentioning how superior it is to a human driver. Speaking at a shareholder meeting in August, he said Tesla was “solving a really important part of AI, and one that could ultimately save millions of lives and prevent tens of millions of serious injuries just by driving an order of magnitude safer than humans.” Musk does have data to back this up: since 2018, Tesla has released quarterly safety reports to the public, which show a consistent benefit to using Autopilot. The most recent, from late 2022, said Teslas with Autopilot engaged were roughly one-tenth as likely to crash as a regular car.

    That’s the argument Tesla will need to take to the public and to juries this spring. In the words of the company’s safety report, “While no car can prevent all accidents, we work every day to make them much less common.” Autopilot may still crash at some rate, the argument goes, but without that technology we would crash far more often.