A week is a long time in politics – especially when it comes to whether it’s okay to give robots the right to kill people on the streets of San Francisco.
In late November, the city’s Board of Supervisors gave local police the right to kill a crime suspect using a remote-controlled robot if they felt that inaction would endanger the public or police. The justification for the so-called “killer robot plan” was that it could prevent atrocities such as the 2017 Mandalay Bay shooting in Las Vegas, which killed 60 people and injured more than 860 others, from taking place in San Francisco.
But just over a week later, those same lawmakers reversed their decision, sending the plans back to a committee for further review.
The reversal is due in part to the massive public outcry and lobbying that followed the initial approval. Concerns were raised that removing people from important life-and-death decisions was a step too far. On December 5, a protest took place outside San Francisco City Hall, and at least one supervisor who initially approved the decision later said he regretted his choice.
“Despite my own deep concerns about the policy, I voted for it after additional guardrails were added,” tweeted Gordon Mar, a supervisor in San Francisco’s Fourth District. “I regret it. I’ve become increasingly uncomfortable with our vote and the precedent it sets for other cities without an equally strong commitment to police accountability. I don’t think making state violence more remote, distant, and less human is a step forward.”
The question facing supervisors in San Francisco is essentially about the value of a life, says Jonathan Aitken, an associate professor of robotics at the University of Sheffield in the United Kingdom. “The action of using deadly force is always deeply considered, both in police and military operations,” he says. Those deciding whether or not to take an action that could cost a life need important contextual information to make that judgment in an informed way – context that remote operation can strip away. “Small details and elements are crucial, and the spatial separation removes them,” says Aitken. “Not because the operator may not consider them, but because they may not be included in the data presented to the operator. This can lead to errors.” And errors, when it comes to deadly force, can literally mean the difference between life and death.
“There are a lot of reasons why arming robots is a bad idea,” says Peter Asaro, an associate professor at The New School in New York who studies the automation of policing. He believes the decision is part of a wider movement to militarize the police. “You can imagine a possible use case where it’s extremely useful, like a hostage situation, but there’s all kinds of mission creep,” he says. “That is detrimental to the public, and particularly to communities of color and poor communities.”
Asaro also dismisses the suggestion that the weapons on the robots could be replaced with bombs, saying that the use of bombs in a civilian context can never be justified. (Some police departments in the United States already deploy bomb-disposal robots; in 2016, the Dallas Police Department used a bomb-laden robot to kill a suspect in what experts called an “unprecedented” moment.)