We already knew where OpenAI CEO Sam Altman stands on artificial intelligence versus the human saga: it will be transformative, historic, and overwhelmingly beneficial. He has been nothing if not consistent across countless interviews. For some reason, this week he felt it necessary to distill those opinions into a concise blog post. “The Intelligence Age,” as he calls it, will be a time of abundance. “We can have shared prosperity to a degree that seems unimaginable today; in the future, everyone’s lives can be better than anyone’s life is now,” he writes. “Although it will happen incrementally, astonishing triumphs – fixing the climate, establishing a space colony, and the discovery of all of physics – will eventually become commonplace.”
Perhaps he published it to challenge a line of thought that dismisses the apparent gains of large language models as an illusion. Nuh-uh, he says. We’re getting this big AI bonus because “deep learning works,” as he put it in an interview later this week, mocking those who say programs like OpenAI’s GPT-4o are just dumb engines spitting out the next token in a sequence. “Once it can prove unproven mathematical theorems, do we really still want to debate, ‘Oh, but it’s just predicting the next token?’” he said.
Whatever you think of Sam Altman, there’s no question that this is his truth: artificial general intelligence – AI that matches and then surpasses human capabilities – will erase the problems facing humanity and usher in a golden age. I propose we call this deus ex machina notion the Strawberry Shortcut, in honor of the codename for OpenAI’s recent breakthrough in AI reasoning. Like strawberry shortcake, the premise looks tasty but turns out to be less substantial once consumed.
Altman rightly notes that the advance of technology has brought luxuries to the common man, some of which were unavailable to pharaohs and lords. Charlemagne never enjoyed air conditioning! Working-class people, and even some on welfare, have dishwashers, giant-screen TVs, iPhones, and delivery services that bring pumpkin spice lattes and pet food to their doors. But Altman doesn’t acknowledge the whole story. Despite that enormous wealth, not everyone is prospering, and many are homeless or severely impoverished. To paraphrase William Gibson, paradise is here, it’s just not evenly distributed. That’s not because the technology has failed – we have. I suspect the same will be true if AGI comes along, especially since so many jobs will be automated.
Altman isn’t very specific about what life will be like if many of our current jobs go the way of 18th-century lamplighters. We got a hint of his vision this week on a podcast that asks tech stars and celebrities to share their Spotify playlists. Explaining why he chose the song “Underwater” by Rüfüs du Sol, Altman said it was a tribute to Burning Man, which he has attended several times. The festival, he said, “is part of what post-AGI can look like, where people just focus on doing things for each other, taking care of each other, and making incredible gifts to give each other.”
Altman is a strong supporter of universal basic income, which he seems to think will cushion the blow of lost wages. Artificial intelligence could indeed generate the wealth to make such a plan feasible, but there is little evidence that the people who amass those fortunes – or even those who still earn modest incomes – will be inclined to embrace the concept. Altman may have had a great experience at Burning Man, but some kindred souls on the playa seem to be up in arms over a proposal, which would affect only those worth more than $100 million, to tax a portion of their unrealized capital gains. It’s a questionable premise that such people – or others who become super-rich working at AI companies – will crack open their coffers to finance leisure for the masses. One of the two major political parties in the US can’t even stomach Medicaid, so one can only imagine how populist demagogues will view UBI.
I’m also wary of the supposed joy that will come when all our big problems are solved. Let’s grant that AI may indeed solve humanity’s greatest riddles. We humans would still have to actually implement those solutions, and that is where we have failed time and time again. We don’t need a large language model to tell us that war is hell and that we shouldn’t kill each other. Yet wars keep happening.