A Battlefield AI Company Says It’s One of the Good Guys

    Instead, the tagline says less about what the company does and more about why it does it. Helsing’s job postings brim with idealism, calling for people who believe that “democratic values are worth protecting.”

    Helsing’s three founders speak of the 2014 Russian invasion of Crimea as a wake-up call that all of Europe had to be ready to respond to Russian aggression. “I started to worry more and more that we are falling behind with the key technologies in our open societies,” says Reil. That feeling grew in 2018, when he watched Google employees protest a deal with the Pentagon under which Google would help the military use AI to analyze drone imagery. More than 4,000 employees signed a letter arguing that it was morally and ethically irresponsible for Google to aid military surveillance and its potentially deadly consequences. In response, Google said it would not renew the contract.

    “I just didn’t understand the logic of it,” says Reil. “If we want to live in open and free societies, be who we want to be and say what we want to say, we need to be able to protect them. We cannot take them for granted.” He worried that if Big Tech, with all its resources, were discouraged from working with the defense industry, the West would inevitably fall behind. “I felt like if they don’t do it, if the best Google engineers don’t want to work on this, then who will?”

    It’s usually hard to tell whether defense products work the way their makers say they do. Companies that sell them, including Helsing, claim that transparency about the details would compromise the effectiveness of their tools. But as we talk, the founders are keen to convey what makes their AI compatible with the democratic regimes it hopes to sell to. “We value privacy and freedom, and we would never do things like facial recognition,” says Scherf, who says the company wants to help military personnel recognize objects, not people. “There are certain things that are not necessary for the defense mission.”

    But creeping automation in a deadly industry like defense still raises thorny issues. If all Helsing’s systems offer is greater battlefield awareness that helps armies understand where targets are, that’s no problem, says Herbert Lin, a senior researcher at Stanford University’s Center for International Security and Cooperation. But once this system is in place, he says, decision makers will be under pressure to connect it to autonomous weapons. “Policymakers should resist the idea of doing that,” says Lin, adding that humans, not machines, should be responsible if mistakes are made. If AI “kills a tractor instead of a truck or a tank, that’s bad. Who will be held responsible for this?”

    Reil insists that Helsing does not make autonomous weapons. “We make the opposite,” he says. “We create AI systems that help people better understand the situation.”

    While operators can use Helsing’s platform to take down a drone, right now it’s a human who makes that decision, not the AI. But there are questions about how much autonomy people really have when working closely with machines. “The less users understand the tools they’re working with, the more they treat them like magic,” says Jensen of the Center for Strategic and International Studies, who argues that this means military users may trust AI either too much or too little.