The CEO of Google DeepMind thinks that AI will make people less selfish

    If you reach a point where progress has outpaced the ability to make these systems safe, would you take a break?

    I don't think today's systems pose an existential risk, so it's still theoretical. The geopolitical questions could actually end up being trickier. But given enough time and enough care and thoughtfulness, and using the scientific method …

    If the time frame is as tight as you say, we don't have much time for care and thoughtfulness.

    We don't have a lot of time. We're increasingly putting resources into security and things like cyber, and also research into, you know, controllability and understanding of these systems, sometimes called mechanistic interpretability. And then at the same time, we need to have societal debates about institution building. How do we want governance to work? How are we going to get international agreement, at least on some basic principles around how these systems are used and deployed and also built?

    How much do you think AI is going to change or eliminate people's jobs?

    What generally tends to happen is new jobs are created that make use of the new tools or technologies and are actually better. We'll see if it's different this time, but for the next few years we'll have these incredible tools that supercharge our productivity and actually almost make us a little superhuman.

    If AGI can do everything humans can do, then it would seem that it could do the new jobs too.

    There are a lot of things that we won't want to do with a machine. A doctor could be helped by an AI tool, or you could even have an AI kind of doctor. But you wouldn't want a robot nurse; there's something about the human empathy aspect of that care that's particularly humanistic.

    Tell me what you envision when you look at our future in 20 years and, if your prediction holds, AGI is everywhere?

    If everything goes well, then we should be in an era of radical abundance, a kind of golden age. AGI can solve what I call root-node problems in the world: curing terrible diseases, much healthier and longer lifespans, finding new energy sources. If that all happens, then it should be an era of maximum human flourishing, where we travel to the stars and colonize the galaxy. I think that will begin to happen in 2030.

    I'm skeptical. We have incredible abundance in the Western world, but we don't distribute it fairly. As for solving big problems, we don't need answers so much as resolve. We don't need AGI to tell us how to fix climate change; we know how. But we don't do it.

    I agree. As a species, we haven't been good at collaborating. Our natural habitats are being destroyed, and it's partly because it would require people to make sacrifices, and people don't want to. But this radical abundance from AI will make things feel like a non-zero-sum game.

    AGI would change human behavior?

    Yes. Let me give you a very simple example. Water access is going to become a huge issue, but we have a solution: desalination. It costs a lot of energy, but if there were renewable, free, clean energy [because AI came up with it] from fusion, then suddenly you solve the water access problem. Suddenly it's no longer a zero-sum game.