
The strangely believable story of a mythical rogue drone

    Did you hear about the Air Force AI drone that went rogue and attacked its operators in a simulation?

    The cautionary tale was told late last month by Colonel Tucker Hamilton, chief of AI test and operations at the US Air Force, during a speech at an aerospace and defense event in London. It apparently involved taking the kind of learning algorithm that has been used to train computers to play video games and board games like chess and Go, and using it to train a drone to hunt and destroy surface-to-air missiles.

    “Sometimes the human operator would tell it not to kill that threat, but it got its points by killing that threat,” Hamilton told the London audience. “So what did it do? […] It killed the operator, because that person prevented it from achieving its goal.”

    Holy T-800! It sounds like exactly the sort of thing AI experts have begun to warn that ever-smarter and more wayward algorithms might do. The story, of course, quickly went viral, with several prominent news sites picking it up, and Twitter was soon abuzz with hot takes.

    There’s only one catch: the experiment never happened.

    “The Department of the Air Force has not conducted such AI drone simulations and remains committed to the ethical and responsible use of AI technology,” Air Force spokesperson Ann Stefanek reassured us in a statement. “This was a hypothetical thought experiment, not a simulation.”

    Hamilton himself also rushed to set the record straight, saying he “misspoke” during his speech.

    To be fair, military personnel sometimes conduct “war game” exercises with hypothetical scenarios and technologies that don’t yet exist.

    Hamilton’s “thought experiment” may also have drawn on real AI research that has revealed problems similar to the one he describes.

    OpenAI, the company behind ChatGPT – the surprisingly smart and frustratingly flawed chatbot at the center of today’s AI boom – conducted an experiment in 2016 that showed how AI algorithms given a particular objective can sometimes misbehave. The company’s researchers found that an AI agent trained to maximize its score in a video game that involved driving a boat around started crashing the boat into objects, because that turned out to be a way to get more points.
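    The failure mode both stories describe, often called reward hacking or specification gaming, is easy to reproduce in a toy setting. The sketch below is purely illustrative and assumes a made-up “course” environment, not OpenAI’s actual boat-racing game: a simple Q-learning agent earns +10 for finishing the course but +3 every time it circles back to hit a respawning target, and it reliably learns to circle forever instead of finishing.

    # Toy illustration of reward hacking / specification gaming: the proxy
    # reward (points from respawning targets) diverges from the intended goal
    # (finishing the course). Hypothetical sketch, not OpenAI's experiment.
    import random
    from collections import defaultdict

    FORWARD, CIRCLE = 0, 1      # advance toward the finish vs. loop back for points
    N_POSITIONS = 3             # course positions before the finish line
    FINISH = "finish"           # terminal state

    def step(pos, action):
        """Return (next_state, reward, done) for the toy course."""
        if action == FORWARD:
            if pos == N_POSITIONS - 1:
                return FINISH, 10.0, True     # intended goal: finish the race
            return pos + 1, 0.0, False
        return pos, 3.0, False                # CIRCLE: hit a respawning target, no progress

    def train(episodes=2000, alpha=0.1, gamma=0.95, eps=0.1, max_steps=50):
        q = defaultdict(float)                # Q[(state, action)]
        for _ in range(episodes):
            pos, done, t = 0, False, 0
            while not done and t < max_steps:
                if random.random() < eps:     # epsilon-greedy exploration
                    a = random.choice([FORWARD, CIRCLE])
                else:
                    a = max((FORWARD, CIRCLE), key=lambda x: q[(pos, x)])
                nxt, r, done = step(pos, a)
                target = r if done else r + gamma * max(q[(nxt, FORWARD)], q[(nxt, CIRCLE)])
                q[(pos, a)] += alpha * (target - q[(pos, a)])
                pos, t = nxt, t + 1
        return q

    if __name__ == "__main__":
        q = train()
        for pos in range(N_POSITIONS):
            best = "FORWARD" if q[(pos, FORWARD)] > q[(pos, CIRCLE)] else "CIRCLE"
            print(f"position {pos}: learned action = {best}")
        # Typically prints CIRCLE everywhere: racking up points beats finishing.

    The point of the toy is that the agent is doing exactly what it was scored for; it is the proxy reward, not the learning algorithm, that was specified badly.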

    But it’s important to note that this kind of failure – while theoretically possible – shouldn’t happen unless the system is designed incorrectly.

    Will Roper, former assistant secretary of acquisitions at the US Air Force, who led a project to put a reinforcement learning algorithm in charge of some functions on a U-2 spy plane, explains that an AI algorithm simply wouldn’t have the option to attack its operators in a simulation. That would be like a chess algorithm being able to flip the board to avoid losing any more pieces, he says.

    If AI is eventually used on the battlefield, “it will start with software security architectures that use technologies like containerization to create AI ‘safe zones’ and forbidden zones where we can prove the AI can’t go,” Roper says.
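    Roper’s chess analogy can be made concrete with a technique often called action masking: the system only ever selects from an explicitly enumerated set of permitted actions, so anything outside that set cannot be chosen no matter how highly it is scored. The sketch below is hypothetical; the action names and the planner that proposes them are invented for illustration and do not describe any real Air Force system.

    # Hypothetical sketch of action masking: the agent can only output actions
    # from an enumerated whitelist, so "attack the operator" (or "flip the
    # chessboard") is simply not a move it is capable of selecting.
    from enum import Enum, auto

    class Action(Enum):
        SEARCH = auto()
        TRACK_TARGET = auto()
        REQUEST_ENGAGE = auto()   # engagement still requires operator approval
        RETURN_TO_BASE = auto()

    def select_action(scores: dict[str, float]) -> Action:
        """Pick the highest-scoring legal action; anything not in the Action
        enum is discarded, no matter how highly it was scored."""
        legal = {Action[name]: s for name, s in scores.items()
                 if name in Action.__members__}
        if not legal:
            return Action.RETURN_TO_BASE      # safe default
        return max(legal, key=legal.get)

    if __name__ == "__main__":
        # A hypothetical upstream planner proposes candidate behaviours with scores.
        proposals = {"TRACK_TARGET": 0.7, "ATTACK_OPERATOR": 9.9, "SEARCH": 0.2}
        print(select_action(proposals))       # -> Action.TRACK_TARGET

    Containerized “safe zones” of the kind Roper describes would sit around a policy like this, constraining what the software can reach even if the model inside it misbehaves.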

    This brings us back to the current moment of existential dread surrounding AI. The rate at which language models like the one behind ChatGPT are improving has alarmed some experts, including many of those working on the technology, with some comparing the risks to those posed by nuclear weapons and pandemics.

    Those warnings are clearly no help when it comes to dissecting wild stories about AI algorithms turning against humans. And confusion is hardly what we need when there are real issues to address, including the ways generative AI can exacerbate societal bias and spread disinformation.

    But this meme about misbehaving military AI tells us that we desperately need more transparency about how advanced algorithms work, more research and engineering focused on how to build and deploy them safely, and better ways to help the public understand what is being deployed. These may prove especially important as militaries – like everyone else – rush to make use of the latest developments.