The subtitle of the doom bible to be published by AI doomers Eliezer Yudkowsky and Nate Soares later this month is "Why Superhuman AI Would Kill Us All." But it should really be "Why Superhuman AI Will Kill Us All," because even the coauthors don't believe the world will take the measures necessary to prevent AI from eliminating all non-super humans. The book is beyond dark, reading like notes scrawled in a dimly lit prison cell the night before a dawn execution. When I meet these self-appointed Cassandras, I ask them outright whether they believe they will personally meet their ends through some machination of superintelligence. The answers come quickly: "yeah" and "yup."
I'm not surprised, because I've read the book — the title, by the way, is If Anyone Builds It, Everyone Dies. Still, it's jarring to hear this. It's one thing to, say, write about cancer statistics and quite another to talk about coping with a fatal diagnosis. I ask them how they think the end will come for them. Yudkowsky at first dodges the answer. "I don't spend a lot of time picturing my demise, because it doesn't seem like a useful mental notion for dealing with the problem," he says. Under pressure he relents. "I would guess suddenly falling over dead," he says. "If you want a more accessible version, something about the size of a mosquito, or maybe a dust mite, landed on the back of my neck, and that's that."
The technical details of his imagined fatal blow, delivered by an AI-powered dust mite, are left unexplained, and Yudkowsky doesn't think it's worth working out how that would function. He probably couldn't understand it anyway. Part of the book's central argument is that superintelligence will come up with scientific feats we can no more comprehend than cave dwellers could imagine modern technology. Coauthor Soares says he pictures the same thing happening to him, but adds that, like Yudkowsky, he doesn't spend much time dwelling on the particulars of his demise.
We don't stand a chance
Reluctance to visualize the circumstances of their personal demise is an odd thing to hear from people who have just coauthored an entire book about everyone's demise. For doomer-porn aficionados, If Anyone Builds It is required reading. After zipping through the book, I understand the fuzziness about nailing down the method by which AI would end our lives and all human life thereafter. The authors do speculate a bit. Boiling the oceans? Blotting out the sun? All guesses are probably wrong, because we're locked into a 2025 mindset while the AI will be thinking eons ahead.
Yudkowsky is AI's most famous apostate, having switched from researcher to grim reaper years ago. He's even done a TED talk. After years of public debate, he and his coauthor have an answer for every counterargument launched against their dire prediction. For starters, it might seem counterintuitive that our days are numbered by LLMs, which often stumble on simple arithmetic. Don't be fooled, the authors say. "AIs won't stay dumb forever," they write. If you think superintelligent AIs will respect boundaries humans set, forget it, they say. Once models start teaching themselves to get smarter, AIs will develop "preferences" of their own that won't align with what we humans want them to prefer. Eventually they won't need us. They won't be interested in us as conversation partners or even as pets. We'd be a nuisance, and they'd set out to eliminate us.
The fight won't be a fair one. The authors believe AI might at first need human help to build its own factories and labs — easily arranged by stealing money and bribing people to assist it. Then it will build things we can't understand, and those things will end us. "One way or another," they write, "the world fades to black."
The authors see the book as a kind of shock treatment, meant to jolt humanity out of its complacency and into the drastic measures needed to head off this unimaginably bad conclusion. "I expect to die from this," says Soares. "But the fight's not over until you're actually dead." Too bad, then, that the solutions they propose to stop the devastation seem even more far-fetched than the idea that software will kill us all. It all boils down to this: Hit the brakes. Monitor data centers to make sure they're not nurturing superintelligence. Bomb those that don't follow the rules. Stop publishing papers with ideas that accelerate the march to superintelligence. Would they have banned, I ask them, the 2017 paper on transformers that kicked off the generative-AI movement? Oh yes, they would have, they respond. Instead of ChatGPT, they want Ciao GPT. Good luck stopping this trillion-dollar industry.
The odds
Personally, I don't see my own lights being snuffed out by a bite on the neck from some super-advanced mite. Even after reading this book, I don't think it's likely that AI will kill us all. Yudkowsky has previously dabbled in Harry Potter fan fiction, and the fanciful extinction scenarios he spins are too strange for my puny human brain to accept. My hunch is that even if superintelligence does want to get rid of us, it will stumble in carrying out its genocidal plans. AI might be capable of beating humans in a fight, but I'll bet against it in a battle with Murphy's law.
Still, the catastrophe theory doesn't seem impossible, especially since no one has really established a ceiling on how smart AI can become. Studies also show that advanced AI has picked up many of humanity's nastier traits, even contemplating blackmail to stave off retraining, in one experiment. It's also disturbing that some researchers who spend their lives building and improving AI think there's a nontrivial chance the worst could happen. One survey indicated that nearly half of the AI scientists who responded pegged the odds of some kind of species wipeout at 10 percent or higher. If they believe that, it's strange that they show up for work each day to make AGI happen.
My gut tells me the scenarios Yudkowsky and Soares spin are too bizarre to be true. But I can't be certain they're wrong. Every author dreams of their book becoming a lasting classic. Not so much these two. If they're right, no one will be around to read their book in the future. Just a lot of decomposing bodies that once felt a slight pinch on the back of their necks, and the rest was silence.