On Monday, OpenAI CEO Sam Altman outlined his vision for an AI-driven future of technological progress and global prosperity in a new personal blog post titled “The Intelligence Age.” The essay paints a picture of human progress accelerated by AI, with Altman suggesting that superintelligent AI could emerge within the next decade.
“It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I’m confident we’ll get there,” he wrote.
OpenAI’s current goal is to create AGI (artificial general intelligence), a term for hypothetical technology that can match human intelligence in performing many tasks without requiring specific training. Superintelligence, on the other hand, surpasses AGI and can be thought of as a hypothetical level of machine intelligence that can dramatically outperform humans at every intellectual task, perhaps even to an unfathomable degree.
Superintelligence (sometimes called “ASI,” for “artificial superintelligence”) is a popular but sometimes fringe topic within the machine learning community, and has been for years, especially since controversial philosopher Nick Bostrom published Superintelligence: Paths, Dangers, Strategies in 2014. OpenAI co-founder and former chief scientist Ilya Sutskever left the company and in June announced a new venture with the term in its name: Safe Superintelligence. Meanwhile, Altman himself has been talking about developing superintelligence since last year.
So how long is “a few thousand days”? There’s no way to say for sure. The likely reason Altman chose a vague number is that he doesn’t know exactly when ASI will arrive, but it sounds like he thinks it could happen within a decade. For comparison, 2,000 days is about 5.5 years, 3,000 days is about 8.2 years, and 4,000 days is almost 11 years.
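For anyone who wants to double-check that conversion, here is a minimal Python sketch (the 365.25 days-per-year figure is simply the calendar average once leap years are included):

```python
# Convert Altman's "few thousand days" figures into years.
# 365.25 days/year is the average calendar year, including leap years.
DAYS_PER_YEAR = 365.25

for days in (2_000, 3_000, 4_000):
    print(f"{days:,} days is about {days / DAYS_PER_YEAR:.1f} years")

# Prints:
# 2,000 days is about 5.5 years
# 3,000 days is about 8.2 years
# 4,000 days is about 11.0 years
```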
It’s easy to criticize Altman’s vagueness here; no one can truly predict the future. But as CEO of OpenAI, Altman is likely aware of AI research techniques in the pipeline that aren’t widely known to the public. So even when couched in broad time frames, the claim comes from a notable source in the AI field, albeit one heavily invested in making sure that AI progress doesn’t stall.
Not everyone shares Altman’s optimism and enthusiasm. Computer scientist and frequent AI critic Grady Booch quoted Altman’s “few thousand days” prediction and wrote on X: “I am so freaking tired of all the AI hype: it has no basis in reality and serves only to inflate valuations, inflame the public, garnet [sic] headlines, and distract from the real work going on in computing.”
Criticism aside, it’s notable when the CEO of what is arguably the defining AI company of the moment makes a sweeping prediction about future capabilities, even if that prediction also serves his perpetual efforts to raise money. Building the infrastructure needed to power AI services is top of mind for many tech CEOs these days.
“If we want to put AI into the hands of as many people as possible,” Altman writes in his essay, “we need to drive down the cost of compute and make it abundant (which requires lots of energy and chips). If we don’t build enough infrastructure, AI will be a very limited resource that wars get fought over and that becomes mostly a tool for rich people.”
Altman's vision for “The Intelligence Age”
Elsewhere in the essay, Altman frames our current era as the dawn of “The Intelligence Age,” the next transformative technological era in human history, after the Stone Age, the Agricultural Age, and the Industrial Age. He credits the success of deep learning algorithms as the catalyst for this new era, stating simply, “How did we get to the doorstep of the next leap in prosperity? In three words: deep learning worked.”
The OpenAI chief envisions AI assistants becoming increasingly capable, eventually forming “personal AI teams” that can help individuals accomplish almost anything they can imagine. He predicts that AI will enable breakthroughs in education, healthcare, software development, and other fields.
While Altman acknowledges potential downsides and labor market disruptions, he remains optimistic about AI’s overall impact on society. He writes, “Prosperity alone doesn’t necessarily make people happy—there are plenty of miserable rich people—but it would meaningfully improve the lives of people around the world.”
Even with AI regulation like California’s SB-1047 being a hot-button topic of the day, Altman did not specifically mention sci-fi dangers from AI. Bloomberg columnist Matthew Yglesias wrote on X: “Notable that @sama is no longer even paying lip service to existential risk concerns, the only downsides he’s contemplating are labor market adjustment issues.”
While he is enthusiastic about the potential of AI, Altman also urges caution, albeit vaguely. He writes: “We need to act wisely but with conviction. The dawn of the Intelligence Age is a momentous development with very complex and extremely high-stakes challenges. It will not be an entirely positive story, but the upside is so tremendous that we owe it to ourselves, and the future, to figure out how to navigate the risks in front of us.”
Labor market disruptions aside, Altman doesn’t say exactly how the Intelligence Age will fall short of being entirely positive, but he closes with an analogy about an obsolete profession lost to technological change.
“Many of the jobs we do today would have looked like trifling wastes of time to people a few hundred years ago, but nobody is looking back at the past, wishing they were a lamplighter,” he wrote. “If a lamplighter could see the world today, he would think the prosperity all around him was unimaginable. And if we could fast-forward a hundred years from today, the prosperity all around us would feel just as unimaginable.”