NYPD's Big Artificial-Intelligence Reveal

The nation’s largest police force has developed a first-of-its-kind algorithm to track crimes across the city and identify patterns. Privacy advocates worry it will reinforce existing racial biases.

SPEED READ:

  • NYPD revealed this month that it has been using artificial intelligence to track crimes and spot patterns since late 2016.
  • The first-in-the-nation technology is called Patternizr. 
  • Privacy advocates worry it will reinforce existing racial biases.
 
The details of the crime were uniquely specific: Wielding a hypodermic syringe as a weapon, a man in New York City attempted to steal a power drill from a Home Depot in the Bronx. After police arrested him, they quickly ascertained that he'd done the same thing before, a few weeks earlier at another Home Depot, seven miles away in Manhattan.

It wasn't a detective who linked the two crimes. It was a new technology called Patternizr, machine-learning software that sifts through police data to find patterns and connect similar crimes. Developed by the New York Police Department, Patternizr is the first tool of its kind in the nation (that we know about). It has been in use by the NYPD since December 2016, but its existence was first disclosed by the department this month.

“The goal of all of this is to identify patterns of crime,” says Alex Chohlas-Wood, the NYPD's former director of analytics and one of the researchers who worked on Patternizr. He is now the deputy director of Stanford University’s Computational Policy Lab. “When we identify patterns more quickly, it helps us make arrests more quickly.”

Many privacy advocates, however, worry about the implications of deploying artificial intelligence to fight crimes, particularly the potential for it to reinforce existing racial and ethnic biases. 

New York City has the largest police force in the country, with 77 precincts spread across five boroughs. The number of crime incidents is vast: In 2016, NYPD reported more than 13,000 burglaries, 15,000 robberies and 44,000 grand larcenies. Manually combing through arrest reports is laborious and time-consuming -- and often fruitless.

“It’s difficult to identify patterns that happen across precinct boundaries or across boroughs,” says Evan Levine, NYPD's assistant commissioner of data analytics.

Patternizr automates much of that process. The algorithm scours all reports in the NYPD's database, comparing attributes such as the method of entry, the weapons used and the distance between incidents, and assigns pairs of complaints a similarity score. A human data analyst then determines which complaints should be grouped together and presents those to detectives to help winnow their investigations.
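
To make the scoring-and-ranking idea concrete, below is a minimal, purely illustrative sketch of pairwise similarity scoring between complaints. The attributes (method of entry, weapon, location), the weights and the names (`Complaint`, `similarity`, `rank_candidate_pairs`) are hypothetical; Patternizr's actual feature set and trained model are not public in this form.

```python
# Illustrative sketch only: a toy pairwise similarity scorer for crime
# complaints, loosely mirroring the process described above. Attributes,
# weights and scoring logic are hypothetical, not the NYPD's actual model.
from dataclasses import dataclass
from itertools import combinations
from math import hypot


@dataclass
class Complaint:
    complaint_id: str
    method_of_entry: str  # e.g. "forced door", "none"
    weapon: str           # e.g. "syringe", "none"
    x: float              # location coordinates, in arbitrary units
    y: float


def similarity(a: Complaint, b: Complaint) -> float:
    """Return a 0-1 score; higher means the two complaints look more alike."""
    score = 0.0
    score += 0.4 if a.method_of_entry == b.method_of_entry else 0.0
    score += 0.4 if a.weapon == b.weapon else 0.0
    # Nearby incidents score higher; anything farther than 10 units adds nothing.
    distance = hypot(a.x - b.x, a.y - b.y)
    score += 0.2 * max(0.0, 1.0 - distance / 10.0)
    return score


def rank_candidate_pairs(complaints: list[Complaint]) -> list[tuple[float, str, str]]:
    """Score every pair of complaints and sort from most to least similar."""
    pairs = [
        (similarity(a, b), a.complaint_id, b.complaint_id)
        for a, b in combinations(complaints, 2)
    ]
    return sorted(pairs, reverse=True)


if __name__ == "__main__":
    reports = [
        Complaint("bronx-001", "none", "syringe", 0.0, 0.0),
        Complaint("manhattan-002", "none", "syringe", 7.0, 0.0),
        Complaint("queens-003", "forced door", "none", 20.0, 5.0),
    ]
    for score, left, right in rank_candidate_pairs(reports):
        print(f"{left} <-> {right}: {score:.2f}")
```

In this toy example, the two syringe robberies score far higher than any other pairing; in the real workflow, as described above, a human analyst reviews the top-ranked matches and decides which complaints to group into a pattern before detectives see them.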

On average, more than 600 complaints per week are run through Patternizr. The program is not designed to track certain crimes, including rapes and homicides. In the short term, the department is using the technology to track petty larcenies. 

The NYPD used 10 years of manually collected historical crime data to develop Patternizr and teach it to detect patterns. In 2017, the department hired 100 civilian analysts to use the software. While the technology was developed in-house, the software is not proprietary, and because the NYPD published the algorithm, “other police departments could take the information we’ve laid out and build their own tailored version of Patternizr,” says Levine.

Since the existence of the software was made public, some civil liberties advocates have voiced concerns that a machine-based tool may unintentionally reinforce biases in policing.

“The institution of policing in America is systemically biased against communities of color,” New York Civil Liberties Union legal director Christopher Dunn told Fast Company. “Any predictive policing platform runs the risks of perpetuating disparities because of the over-policing of communities of color that will inform their inputs. To ensure fairness, the NYPD should be transparent about the technologies it deploys and allow independent researchers to audit these systems before they are tested on New Yorkers.”

New York police point out that the software was designed to exclude race and gender from its algorithm. Based on internal testing, the NYPD told Fast Company, the software is no more likely to generate links to crimes committed by persons of a specific race than a random sampling of police reports. 

But Gartner analyst Darin Stewart, who authored a paper on bias in artificial intelligence last year, said the efforts to control for racial and gender bias don't go far enough. 

"Removing race and gender as explicit factors in the training data are basically table stakes -- the necessary bare minimum," Stewart said in an interview with the site Tech Target. "It will not eliminate -- and potentially won't even reduce -- racial and gender bias in the model because it is still trained on historical outcomes."

The implications are troublesome, Stewart told the site.

"As Patternizr casts its net, individuals who fit a profile inferred by the system will be swept up. At best, this will be an insult and an inconvenience. At worst, innocent people will be incarcerated. The community needs to decide if the benefit of a safer community overall is worth making that same community less safe for some of its members who have done nothing wrong."

The NYPD has come under fire before for using Big Data technology to help fight crimes. In 2016, the Brennan Center for Justice took legal action against the department over its use of predictive policing software. This past December, the New York State Supreme Court ordered the department to release records about its testing, development and use of predictive policing software.