
This AI Forecast Predicts Storms Ahead

    The year is 2027. Powerful artificial intelligence systems are becoming smarter than humans and are upending the global order. Chinese spies have stolen America's AI secrets, and the White House is rushing to retaliate. Inside a leading AI lab, engineers are spooked to discover that their models are starting to deceive them, raising the possibility that they will go rogue.

    These are not scenes from a sci-fi screenplay. They are scenarios presented by a nonprofit in Berkeley, California, called the AI Futures Project, which has spent the past year predicting what the world will look like over the next few years, as increasingly powerful AI systems are developed.

    The project is led by Daniel Kokotajlo, a former OpenAI researcher who left the company last year over concerns that it was acting recklessly.

    While at OpenAI, Mr. Kokotajlo wrote detailed internal reports about how the race for artificial general intelligence, or AGI (a fuzzy term for human-level machine intelligence), could unfold. After leaving, he teamed up with Eli Lifland, an AI researcher who had a track record of accurately forecasting world events. They got to work trying to predict the next wave of AI.

    The result is "AI 2027", a report and website released this week that describes, in a detailed fictional scenario, what could happen if AI systems surpass human-level intelligence, which the authors expect to happen in the next two to three years.

    "We predict that AIs will continue to improve to the point where they're fully autonomous agents that are better than humans at everything, by the end of 2027 or so," Mr. Kokotajlo said in a recent interview.

    There is no shortage of speculation about AI these days. San Francisco has been gripped by AI fervor, and the Bay Area's tech scene has become a collection of warring tribes and splinter sects, each convinced that it knows how the future will unfold.

    Some AI forecasts have taken the form of a manifesto, such as "Machines of Loving Grace", a 14,000-word essay written last year by Dario Amodei, the chief executive of Anthropic, or "Situational Awareness", a report by the former OpenAI researcher Leopold Aschenbrenner.

    The people at the AI Futures Project designed theirs as a forecast scenario: essentially, a piece of rigorously researched science fiction that uses their best guesses about the future as plot points. The group spent nearly a year honing hundreds of predictions about AI, then brought in a writer, Scott Alexander, who writes the blog Astral Codex Ten, to help turn their predictions into a narrative.

    "We took what we thought was going to happen and tried to make it engaging," Mr. Lifland said.

    Critics of this approach might argue that fictional AI stories are better at scaring people than educating them. And some AI experts will no doubt object to the group's central claim that artificial intelligence will overtake human intelligence.

    Ali Farhadi, the chief executive of the Allen Institute for Artificial Intelligence, an AI lab in Seattle, reviewed the "AI 2027" report and said he was not impressed.

    "I'm all for projections and forecasts, but this forecast doesn't seem to be grounded in scientific evidence, or in the reality of how things are evolving in AI," he said.

    There is no doubt that some of the group's views are extreme. (Mr. Kokotajlo, for example, told me last year that he believed there was a 70 percent chance that AI would destroy or catastrophically harm humanity.) And Mr. Kokotajlo and Mr. Lifland both have ties to effective altruism, a philosophical movement popular among tech workers that has been making dire warnings about AI for years.

    But it is also worth noting that some of Silicon Valley's largest companies are planning for a world beyond AGI, and that many of the crazy-seeming predictions made about AI in the past, such as the view that machines would pass the Turing test, a thought experiment that determines whether a machine can appear to communicate like a human, have come true.

    In 2021, the year before ChatGPT launched, Mr. Kokotajlo wrote a blog post titled "What 2026 Looks Like", describing his view of how AI systems would progress. A number of his predictions proved prescient, and he became convinced that this kind of forecasting was valuable, and that he was good at it.

    "It's an elegant, useful way to communicate your view to other people," he said.

    Last week, Mr. Kokotajlo and Mr. Lifland invited me to their office, a small room in a Berkeley co-working space called Constellation, where a number of AI safety organizations hang a shingle, to show me how they work.

    Mr. Kokotajlo, wearing a military-style brown jacket, grabbed a marker and wrote four abbreviations on a large whiteboard: SC > SAR > SIAR > ASI. Each one, he explained, represented a milestone in AI development.

    First, he said, sometime in early 2027, if current trends hold, AI will be a superhuman coder. Then, by mid-2027, it will be a superhuman AI researcher: an autonomous agent that can oversee teams of AI coders and make new discoveries. Then, in late 2027 or early 2028, it will become a superintelligent AI researcher, a machine intelligence that knows more than we do about building advanced AI and can automate its own research and development, essentially building smarter versions of itself. From there, he said, it is a short hop to artificial superintelligence, or ASI, at which point all bets are off.

    If all of this sounds fantastical... well, it is. Nothing remotely like what Mr. Kokotajlo and Mr. Lifland are predicting is possible with today's AI tools, which can barely order a burrito on DoorDash without getting stuck.

    But they are convinced that these blind spots will shrink quickly, as AI systems become good enough at coding to accelerate AI research and development.

    Their report centers on OpenBrain, a fictional AI company that builds a powerful AI system known as Agent-1. (They decided against singling out a particular AI company, instead creating a composite of the leading American AI labs.)

    As Agent-1 gets better at coding, it begins automating much of the engineering work at OpenBrain, which allows the company to move faster and helps it build Agent-2, an even more capable AI researcher. By the end of 2027, when the scenario concludes, Agent-4 is making a year's worth of AI research breakthroughs every week, and threatens to go rogue.

    I asked Mr. Kokotajlo what he thought would happen after that. Did he think, for example, that life in the year 2030 would still be recognizable? Would the streets of Berkeley be filled with humanoid robots? People texting their AI girlfriends? Would any of us have jobs?

    He stared out the window and admitted that he was not sure. If the next few years went well and we kept AI under control, he said, he could imagine a future in which most people's lives were still largely the same, but where nearby "special economic zones" full of hyper-efficient robot factories would churn out everything we needed.

    And if the next few years did not go well?

    "Maybe the sky would be filled with pollution, and the people would be dead?" he said nonchalantly. "Something like that."

    One risk of dramatizing your AI predictions this way is that if you are not careful, measured scenarios can veer into apocalyptic fantasies. Another is that, by trying to tell a dramatic story that captures people's attention, you risk missing more boring outcomes, such as the scenario in which AI behaves itself and does not cause much trouble for anyone.

    Although I agree with the authors of "AI 2027" that powerful AI systems are coming soon, I am not convinced that superhuman AI coders will automatically pick up the other skills needed to bootstrap their way to general intelligence. And I am wary of predictions that assume AI progress will be smooth and exponential, without major bottlenecks or roadblocks along the way.

    But I think this kind of forecasting is worth doing, even if I disagree with some of the specific predictions. If powerful AI really is around the corner, we are all going to need to start imagining some very strange futures.