Count me as one who supports the use of teleoperated or autonomous robots by police in certain violent cases.
Critics call them “killer robots,” but they could just as easily be described as non-human crime-intervention devices, for that is what they are and do. That such robots work is uncontested. Six years ago, a sniper in Dallas, holed up on the second floor of a building, was deliberately targeting cops, taunting them as he shot them. After he had murdered five officers—all White—Dallas Police Chief David Brown felt he had no choice but to deploy a robot, armed with a pound of C4 explosive, to take the sniper out.
It worked. Bye-bye sniper. When critics piled on Chief Brown, his reply was unequivocal: “This wasn't an ethical dilemma for me. I'd do it again to save our officers.”
As many of you know, San Francisco’s Board of Supervisors voted on Tuesday to approve the use of autonomous robots by the Police Department. That led to an almost instantaneous outburst of rage from civil-libertarian and civil-rights types, who complained that the devices are somehow evil, immoral and wrong. The critics included the San Francisco Chronicle, the self-appointed arbiter of all things social justice. In an op-ed yesterday, which the entire editorial board signed off on, the paper came out vehemently against the idea, with criticisms that were, frankly, hysterical. What if someone hacks the robot? What if the robot accidentally kills a civilian? Never mind that the approved scenarios reserve the robots specifically for violent individuals in the act of committing a crime (such as the Dallas sniper), situations in which civilian casualties are virtually impossible; or that the occasions for actually using them would be vanishingly rare. The Chronicle concluded with its usual array of self-justifying rationales: “the department’s arguments…were weak.” Weak according to whom? The reporter who wrote those words? “Assistant Chief Lazar failed to present a convincing case.” Failed according to whom? The reporter? Presenting such opinions as “factual” is clearly misleading.
The op-ed concluded that “The public deserved a robust discussion” of the issue, [but] “That didn’t happen.” Yet I suspect that no discussion by the police would have been “robust” enough for the Chronicle’s editorial board. They went in dead-set against autonomous robots, and nothing they heard, nothing they could have heard, would have convinced them otherwise.
Using robots surely poses a complicated dilemma, and I’m not suggesting there should be no discussion. But it seems to me that many people enter the conversation with biases against the police that incline them to view with suspicion any technology, such as license plate readers or overhead cameras, that makes a cop’s job easier and keeps the public safer. To be sure, there are moral issues involved in using robots, but I would argue they are little different from those involved in arming cops with lethal weapons in the first place. If the worry is that we can’t trust a robot, realize that a cop or police technician controls the robot; it’s not some aberrant machine that can go rogue.
As the science of autonomous robots improves, so will their safety and efficacy. It would be lovely, I think, to give police a tool with which they can actually interfere with, and stop, crime as it’s happening, without putting their own lives in danger. Besides, with 3-D printing and all that, it’s only a matter of time before criminals themselves have access to, and use, robots. UCLA scientists are already at work designing 3-D printed autonomous robots. Soon, perhaps, autonomous robots will be as common as cell phones. Let’s give our cops a head start on deploying them.
Steve Heimoff