It’s the dark, wee morning hours of July 8, 2016, and Micah Xavier Johnson is holed up on the second floor of El Centro College with a rifle, singing. Eleven people are injured; five police officers are dead. After two hours, the Dallas Police Department has given up on negotiations. A Special Weapons and Tactics team is positioned down the hallway from Johnson, working a pound of C-4 plastic explosive into the arm of the department’s Remotec Andros Mark 5A-1. It’s the C-3PO of police robots. It has video cameras and an arm, but aside from being able to blind someone with a flash or dole out a nasty pinch, it is not a fighter. It was made for bomb disposal, not delivery. This morning, for the first time in police-robot history, it will be used to take a human life.
Afterward, news headlines screamed KILLER ROBOTS HAVE ARRIVED. But those headlines miss the point. The robot wasn’t sentient. It didn’t kill somebody; somebody used it to kill somebody else. Much of the debate focused on the robot and others like it: how heavy they are, how fast they are, how their tiny electrical muscles work. These details are superficial, but our collective nervousness that someday robots would call the shots ran deep. Artificial intelligence can be as much a threat as a benediction—but what happened in Dallas had nothing to do with AI.
Under its most orthodox definition, AI is the replication of a biological mind. Philosophers and software engineers can’t agree about whether AI could ever be more than a convincing charade, much less a staple of policing. Under its most liberal definition, AI is what you’d find trying to shoot or outrun you in a video game. Even that is beyond the scope of the world’s police robots. The Andros used in Dallas, like every other police bot, is remote-controlled, like a Tyco toy car.
No one from the Dallas Police Department would speak with me (perhaps because their robot overlords wouldn’t let them), but Tim Dees, a former Nevada police officer and former criminal-justice professor, understands the events of July 8. “The Dallas situation was fairly unique,” says Dees, who writes for PoliceOne, an online publication for law enforcement officers. “The shooter was in an area where he couldn’t be easily visualized by the cops without exposing themselves to gunfire. He was believed to have ample ammo on him, and he said he had explosives with him.”
Don Hummer, associate professor of criminal justice at Penn State Harrisburg, agrees. “The Dallas incident represented an extremely high, if not the highest, rung on the use-of-force continuum,” he says. He explains Johnson’s position 30 feet down a hallway from the SWAT team, in a computer server room from which he could easily defend the only two doorways. “He’d barricaded himself in a space where further casualties were likely if the police stormed it—or, if they did not, from the subject firing at law enforcement or civilians or detonating the explosives he claimed to possess. The decision to neutralize the subject was a virtual necessity. Is the outcome any different if the perpetrator is felled by a sniper’s bullet or by an explosive device attached to a robot?”
If Dallas is a special case, it raises more theoretical questions than practical ones; namely, does the availability of a robot capable of killing a suspect change police decision making from here on out? Because if it does, we won’t be going back.
Technology changes law enforcement. It happened with Kevlar vests, pepper sprays, Tasers, undercover squad cars, body cameras, in-car laptops, radios, flash-bang grenades, beanbag ammunition rounds and tear gas. America’s police forces also transformed when revolvers were swapped for semiautomatic handguns in the 1980s and when long guns were issued to non-SWAT officers in the 1990s. The domino effect of those changes has never been more evident than it is today.
“When I was investigating allegations of police misconduct in New York City, officers were allowed to carry guns but not Tasers,” says Ryan Calo, assistant professor of law at the University of Washington and co-director of the Tech Policy Lab, which studies the collision of U.S. law with new technologies—specifically robotics and online tech. “They would have to call in a supervisor for a Taser. This is because the NYPD wasn’t sure officers would have the experience or operational awareness to make the decision to use nonlethal force, even though they were trusted with lethal force.”
There’s a strand of thought that goes like this: The less likely it is that a certain degree of force will cause friendly casualties, the more likely it is that someone will authorize a greater degree of force—even when it’s not 100 percent necessary. If it shocks you that a police robot set off a bomb in Dallas, what did you think when a CIA drone fired a missile into a car carrying American citizen Kamal Derwish in Yemen in 2002? Derwish was associating with Al Qaeda. It was at the height of the war on terror, and the CIA’s unmanned aerial vehicles—General Atomics MQ-1 Predators—were being retrofitted to shoot Hellfire air-to-surface missiles at the world’s biggest threats. But it raised a furor nonetheless. Shouldn’t American citizens be arrested (or at least an attempt be made to arrest them) before being gunned down? If not for their sake, then for the integrity of our Constitution? Where is due process in the age of technology?
“Legally, the two scenarios are controlled by wholly different laws, but the ethical considerations are quite similar,” says Ron Sullivan, a Harvard law professor who focuses on civil liberties, criminal law and criminal procedure. “Co-extensive with the militarization of police a decade or so ago has been the increased incidence of use of force, including lethal force. The psychology of warfare is markedly different from the norms that should animate policing.”
The laws governing police use of force, deadly and otherwise, are set out in the 1989 Supreme Court ruling Graham v. Connor, in which a man sued Charlotte, North Carolina, police for using excessive force after they observed him quickly enter and leave a convenience store, behavior they found suspicious. Dees explains the Supreme Court’s ruling: “Any use of force must be objectively reasonable in the eyes of the officer, and any subsequent reviewing judicial authority has to consider that officer’s perspective in ruling on the reasonableness of the officer’s actions. This applies whether the use of force is via an empty hand, a firearm or a brick of C-4.” Or a robot.
Robots, bots, brobots, drones, automatons, mechano-men, androids, mandroids, replicants, terminators and UAVs—one can imagine the American executive branch has been into robotics since Tron. The CIA and the Department of Defense began acquiring and developing unmanned aerial vehicles in the early 1980s to the point that, according to the U.S. Navy, there was at least one UAV in the sky during every second of the Gulf War.
Years earlier, in 1966, America’s mad-scientist division, DARPA, funded the first modern unmanned ground vehicle, or UGV. “Shakey” was wheeled, glacially slow and, despite running early AI planning software, could take the better part of an hour to reason its way across a room—nothing resembling modern AI was calling the shots. As with many of DARPA’s ideas, the Department of Defense went lukewarm on it until interest in UGVs reemerged in the early 1980s. The vehicles were built to see inside dangerous buildings and territories and to manipulate explosives placed by enemies and suspects. American police departments have been using them for the same purposes for two decades.
Arming these robots isn’t a foreign idea to those who manufacture them. Northrop Grumman, of which Remotec is a subsidiary, has sold accessories that mount a Franchi 612 or Penn Arms Striker 12 combat shotgun to its Andros UGV since 2004, though sales literature refers to the Franchi 612 as a door-breaching tool and the Penn Arms Striker 12 as a delivery system for less-than-lethal rounds. So the ability to arm robots has been there for more than a decade. In reality, most police agencies don’t have ready access to robots; for those that do, the devices are rare and expensive, used mainly for explosives disposal and for entry into areas deemed too hazardous for police officers.
“Robots with guns are impractical,” says Eric Ivers, president of robot manufacturer RoboteX. He says RoboteX has dealt with at least a thousand police departments, including those in San Francisco, Los Angeles, New York City, Chicago, Seattle and St. Louis. “Reloading would be nearly impossible at the point of use, and delay in wireless signal is a possible problem. From my perspective, the biggest problem is the speed with which a gun could move and track a subject. People can move faster than current police robots can track them. By the time the operator could locate, aim and fire, the subject would likely have moved far enough to avoid being hit.”
Two Korean firms, Samsung Techwin and DoDAAM, and one Israeli company, Rafael Advanced Defense Systems, have developed near-autonomous sentry guns for border defense—the Korean firms’ along the Korean Demilitarized Zone, the Israeli company’s along the Gaza Strip. Software detects and tracks targets through body-heat signatures, but each gun relies on a human operator to fire. Good thing, because an automatic sentry can’t distinguish between friends and enemies; it targets anyone it sees. And because they’re long-range stationary platforms, these devices are unsuitable for police work. Put wheels on one and roll it into a hotel room or a bank, and deficiencies will show up immediately.
“A robot is not agile enough to protect its weapon from being accessed by a hostile,” says Dees. “It’s a relatively simple task to creep up on a remote-controlled vehicle, tip it over, throw a net over it or grab something off it, especially if the vehicle is out of sight of its operator, whose perspective is limited by the vehicle’s camera.”
Neither is a robot good for fighting with explosives. “Robots don’t really need modification in order to be used the way the one in Texas was used,” Ivers says. It isn’t the price tag of the Dallas Police Department’s $151,000 Andros, which survived the explosion, that keeps police from deploying suicide bots, he says. “Any remote-controlled toy car or truck could probably be rigged, driven to a location and detonated. Police have not used explosives in the way Dallas did mostly because there are better ways to accomplish the same goal.”
Police face situations in which they have to end a life for their own and others’ safety. There’s no getting around that. But those who make that decision should also follow legal and civil regulations. Some, like Sullivan, want police departments to write protocols specifically for the use of armed robots. “Armed police robots, in effect, add an insulation layer between the police officer and the subject. It makes the decision to use lethal force much easier because the human being is removed from the final point of contact with the subject,” says Sullivan. “And my strong intuition is that shared responsibility increases the likelihood of irresponsible decision making.”
For others, the current regulations are enough. According to Hummer, every police department already has in place language relevant to armed, nonautonomous robots, but that language is written exclusively for firearms. Something as simple as a broadening of terms—from “firearm” to “any tool in the police arsenal”—could fix that. “As with any critical incident, there is a supervisory decision-making process whereby a senior administrator has the ultimate responsibility for using deadly force,” he says. In Dallas, it was the chief of police and the mayor. When there isn’t time to call police headquarters, it’s the senior officer on the scene. “We have seen in recent months that every level of officer from top management through line officers can be held accountable for misuse of force,” Hummer continues. “For instance, half a dozen officers were indicted in the Freddie Gray case in Baltimore.” Use of an armed robot, as in Dallas, is subject to the same hierarchy of potentially shared responsibility.
One day robots will be as standard-issue as Kevlar vests and sidearms, particularly for negotiation, surveillance, bomb disposal, door breaching and distraction. For the moment, though, and for foreseeable moments, armed robots are as much an aberration as they are a legal means to an end. “Assuming the robot is tele-operated at all times by a person, I don’t see a particularly greater impact than snipers, full-body armor or other militarized police tactics,” says Calo. “The current constitutional framework is sufficient to address a situation such as Dallas, wherein officers use a robot to kill someone.”
“I don’t think overall police decision making will be affected much as a result,” says Hummer. “Every police incident is a continual flow of circumstances, and no two are ever the same. Discretion is the most critical component of doing police work. I firmly believe policing will be one of the last occupations to have the human element diminished.”
Still, to others, the mere presence of passed-down military robots is enough to affect police-civilian relations. “Society would be much better off if the structural divide between military and civilian police remained distinct,” says Sullivan. That said, starting discussions about autonomous robotics and true AI would be wise. Autonomous robots are coming, without a doubt. And as humanity accelerates its efforts to create artificial intelligence, preparing for that day is smart contingency planning. The moment it arrives will be a watershed because it will, for the first time, shift judgment from the officer using the machine to the machine itself.