The Rising Call for Regulating Police Robots

Police have increasingly adopted drones and ground robots to supplement their work. But departments often lack clear policies on the tools’ uses.

Spot the Robot Police Dog. (Image: Boston Dynamics)
In Brief:

  • Police have used drones and robots over the years for purposes ranging from getting an eye-in-the-sky view of accidents to remotely tear-gassing a hard-to-reach suspect. Many anticipate police use of robot and drone technologies to increase. 
  • Most police departments lack policies and standards around acceptable use of drones and robots, says Matthew Guariglia, senior policy analyst at the Electronic Frontier Foundation. 
  • A policy framework from New York University Law School’s Policing Project offers recommended guidance for setting limits and safeguards on use of such tools.


A 400-pound, roughly five-foot-tall robot monitored New York City’s busy Times Square subway station for half of 2023. Equipped with cameras and a button riders could press for help, the bot was meant to deter crime by serving as a visible reminder of surveillance. At the launch of the pilot program, Mayor Eric Adams touted the robot as something that, unlike humans, would work with “no bathroom breaks, no meal breaks.”

Police bots have at times taken a far less passive role, too. In 2024, police in Lubbock, Texas, used a robot to spray tear gas into a room, smoking out a barricaded suspect. When the suspect fled the room, operators drove the robot on top of him, pinning him down. In Los Angeles County in 2016, a claw-equipped robot snatched a gun from an armed suspect. That same year, Dallas police made the unprecedented move of using a robot to kill an active shooter by detonating an explosive, a decision a grand jury ultimately ruled was justified.

And while less novel than ground robots, drones have seen use in many departments, whether helping first responders locate victims at an accident scene, searching for criminal suspects or even monitoring city beaches for potential shark activity.

“The use of robots and drones by law enforcement has really exploded in recent years. They used to be kind of a niche tool, and they're quickly becoming a mainstream policing tool,” says Max Isaacs, director of technology law and policy at the Policing Project, part of the New York University School of Law.

But while acquisition of drones and robots has risen, regulations around their use have lagged behind.

When it comes to rules and standards around acceptable uses of police drones and robots, “in most departments in the country, there is absolutely none. There are no standards. There are no policies,” says Matthew Guariglia, senior policy analyst at the Electronic Frontier Foundation (EFF). That creates risk of misuse, including deploying robots in ways that hurt someone or using them to conduct surveillance in ways that might conflict with First Amendment rights, Guariglia says.

The Policing Project is trying to change that and recently released a policy framework it hopes could guide jurisdictions in adding such rules.

Why Robots Need a New Rulebook

Robots vary widely in capabilities and levels of autonomy, as well as how they’re put to use. Proponents of such tech say police can use robots to keep themselves out of harm's way or get better insights as they surveil an area.

But those who are wary of such tools say conducting police-resident interactions via a robot can dehumanize the relationship, and that using force may come more easily when the interaction feels more like a video game than a face-to-face encounter.

“Police departments say they want to re-earn trust after the kinds of protests of 2020 and yet it seems like technology companies are helping them invent ways that they have to interact with the public less and less,” EFF’s Guariglia says.

Robots also can be hacked or malfunction, and their use hasn’t always gone smoothly: In 2011, a San Francisco Police Department robot dropped and ran over antiquated grenades (fortunately, none detonated).

Some police robot deployments have been criticized as gimmicks meant to signal progress and innovation rather than to actually improve safety. New York City’s 400-pound robot drew such criticism: observers noted the station already had surveillance cameras and that, despite intentions, the robot rarely left its charging pad to patrol and was seldom seen without a human accompanying it. After the pilot ended, the city did not return the bot to use.

Surveillance is another fraught issue. Guariglia believes drones can be useful for purposes like finding someone lost in the woods during a search and rescue mission, or for following cars to avoid the need for high-speed chases that might end in crashes. But the legality (and morality) of certain types of surveillance remains up in the air. For example, police don’t need a warrant to fly a helicopter over a person’s home for aerial surveillance. That calculus changes with a drone that can surreptitiously fly right up to a window, peer through it, zoom in with powerful lenses and possibly use thermal imaging to capture additional information.

“These justifications from 34 years ago about aerial surveillance, these [court] cases rely on assumptions that really don't apply, or don't apply well, in the context of drones,” Isaacs says. “I don't think it's enough to say, ‘Well, that issue is decided, and the courts have said that overflights are fine.’ I think we actually need to go back and question whether that is a sound rule in light of what drone technology can do today.”

Fewer than half of states have policies regarding police use of drones, Isaacs says.

And while officers are permitted to use force or frisk residents for weapons to protect their personal safety, a robot doesn’t have to fear for its life, raising questions about whether, or in what situations, bots should be permitted to make physical contact with or use force against a human.

Appropriate robot use of force remains a disputed topic. In 2022, San Francisco supervisors proposed letting police robots use lethal force in certain circumstances, then reversed course after community pushback, instead banning such uses.

In another case, the Massachusetts American Civil Liberties Union (ACLU) chapter and Boston Dynamics — a company famous for robotic dogs used by numerous police departments — advocated for a bill that would have banned police and others from making or using weaponized robots and drones. The bill was introduced in the 2023-2024 session but ultimately did not pass. When a similar bill emerged in California last fall, Gov. Gavin Newsom vetoed it, arguing that law enforcement needs the option to “sometimes use remotely operated robots to deploy less-lethal force.” Ban attempts continue, however, with a current New York state bill aiming to prohibit not only weaponized police robots but also non-weaponized ones that “may potentially cause injury.”

As for the Policing Project’s advice, Isaacs’ team supports a moratorium on robot use of force: “Our view is not that there can never be a scenario in which weaponizing robots might be appropriate, but at least now, the evidence isn’t there, and there needs to be far more research into the issue before we take that step.”

Writing a New Plan

The Policing Project’s proposed policy framework argues that robots shouldn’t use lethal or non-lethal force and should make physical contact with a human only in certain circumstances, such as to protect someone during a search and rescue effort.

Among the core recommendations: Even if robots can move autonomously, a human must oversee, and be accountable for, their interactions. Warrants generally should be required before robots or drones surveil or enter a property that would require a human officer to get a warrant, and for flyover surveillance of private property. Aircraft flyovers don’t require warrants, but the Policing Project says that drones’ surveillance capabilities intrude more on privacy and so require different treatment.

Police should alert the public about where they’re going to send patrol robots and why. Such disclosure could defuse concerns that certain neighborhoods will get human officers while others get robots, creating an uneven experience of policing. Plus, police should report back to oversight groups, policymakers and perhaps the public about instances where robots were used and the kind of data collected. The Policing Project’s policies also recommend regular audits to ensure departments are following protocol.

While police robot skeptics may worry that passing usage policies legitimizes the technology’s deployment, the alternative is seeing robots and drones used with few guidelines or restrictions, Isaacs says. Meanwhile, police may fear regulations could be too restrictive, but rules to reduce risks can prevent the kinds of bad outcomes that might lead to the tools being banned entirely.

“Everyone has something to lose if we do not enact sound regulation of police drones and robots,” Isaacs says.
Jule Pattison-Gordon is a senior staff writer for Governing. Jule previously wrote for Government Technology, PYMNTS and The Bay State Banner and holds a B.A. in creative writing from Carnegie Mellon.