
DeepMind has detailed all the ways AGI could destroy the world

    While AI hype permeates the internet, technology and business leaders are already looking toward the next step. AGI, or artificial general intelligence, refers to a machine with human-like intelligence and capabilities. If today's AI systems are on a path to AGI, we will need new approaches to ensure that such a machine doesn't work against human interests.

    Unfortunately, we don't have anything as elegant as Isaac Asimov's Three Laws of Robotics. Researchers at DeepMind have been working on this problem and have released a new technical paper (PDF) explaining how to develop AGI safely, which you can download at your leisure.

    It contains an enormous amount of detail, clocking in at 108 pages before references. Although some in the AI field believe AGI is a pipe dream, the authors of the DeepMind paper project that it could arrive by 2030. With that in mind, they sought to understand the risks of a human-like synthetic intelligence, which they acknowledge could lead to "serious harm".

    All the ways AGI could harm humanity

    This work identifies four possible types of AGI risk, along with suggestions on how to mitigate them. The DeepMind team, led by company co-founder Shane Legg, categorized the negative AGI outcomes as misuse, misalignment, mistakes, and structural risks. Misuse and misalignment are discussed at length in the paper, but the latter two are covered only briefly.

    The four categories of AGI risk, as determined by DeepMind.

    Credit: Google DeepMind

    The first possible issue, misuse, is fundamentally similar to current AI risks. However, because AGI will by definition be more powerful, the damage it could do is much greater. A ne'er-do-well with access to AGI could, for example, abuse the system to do harm, asking it to find and exploit zero-day vulnerabilities or to create a designer virus that could be used as a bioweapon.