The Peril — and Promise — of AI in Criminal Justice

Innovation must come with transparency, safeguards and human oversight. We need to deploy the technology in ways that enhance rather than erode public confidence in the justice system.

Using AI to audit police body camera recordings resulted in more professionalism in interactions between police and the public, according to a new study. (Al Seib/Los Angeles Times/TNS)
Sustaining the rule of law requires a justice system that earns the confidence of everyone for being even-handed and transparent. Our liberty, safety and prosperity depend on trustworthy processes and institutions, not the capricious whims of individuals — or machines.

Advanced artificial intelligence holds both enormous promise and tremendous peril, including significant implications for the justice system. AI is already being used in justice contexts such as policing, pretrial justice, sentencing and corrections. Some applications could improve our ability to prevent, detect and solve crimes.

Yet integrating AI into the justice system raises distinct dangers, chief among them that indecipherable algorithms could make decisions that determine life and liberty. At a time when the system is struggling with legitimacy, how can we ensure that AI does not make matters worse and, better still, harness it for good? With AI justice applications rapidly proliferating, this is an urgent question.

In October, our organization, the Council on Criminal Justice, released a report on the implications of AI in criminal justice that captured the thinking of three dozen experts we convened along with the Stanford Criminal Justice Center. As AI technologies continue to evolve, our discussion identified three key considerations that should guide us forward.

First, we must ensure that these technologies and the way they’re used are as transparent as possible. This means striving for glass boxes, not black boxes — that is, favoring models whose underlying algorithms and methodologies can be shared and understood by all involved — and providing for independent third-party auditing and verification.
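To make the glass-box idea concrete, here is a minimal sketch in Python, using scikit-learn. It is an illustration only, not the method of any real risk-assessment tool or of our report: the feature names and data are hypothetical stand-ins. The point is that a simple model's entire decision rule can be printed, published and independently audited.

```python
# A minimal "glass box" sketch: a model whose decision rule can be
# printed and audited, unlike an opaque deep network.
# Feature names and data are hypothetical illustrations.
import numpy as np
from sklearn.linear_model import LogisticRegression

FEATURES = ["prior_failures_to_appear", "pending_charges", "age_at_arrest"]

# Toy data standing in for a historical pretrial dataset.
X = np.array([[2, 1, 19], [0, 0, 45], [3, 2, 22], [0, 1, 37],
              [1, 0, 29], [4, 3, 21], [0, 0, 52], [2, 2, 26]])
y = np.array([1, 0, 1, 0, 0, 1, 0, 1])  # 1 = failed to appear

model = LogisticRegression(max_iter=1000).fit(X, y)

# The entire decision rule fits on a few lines and can be shared with
# defense counsel, judges and third-party auditors.
for name, coef in zip(FEATURES, model.coef_[0]):
    print(f"{name}: weight = {coef:+.3f}")
print(f"intercept: {model.intercept_[0]:+.3f}")
```

A defendant challenging such a model can see exactly which factors counted against them and by how much; no comparable inspection is possible when the decision rule is a proprietary black box.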

For example, a court in Washington state properly ruled that AI-enhanced video could not be introduced in a criminal trial, in part because no human expert could explain how the enhancement was produced. This goes to the heart of our fundamental constitutional rights to confront our accusers and, more broadly, to ensure that ordinary people can challenge accusations against them.

The Post Office scandal that has roiled Britain for nearly two decades illustrates the danger when secret formulas are wielded by government and its favored vendors, with ordinary citizens left in the dark. In that case, hundreds of lives were ruined when faulty accounting software wrongly indicated that the mom-and-pop shops that sell postage had defrauded the government.

Second, the safeguards around transparency, fairness and reliability should be commensurate with the liberty interest at stake and the irreparability of the decision involved. For example, the bar should be lower for AI technologies that help prosecutors and defense counsel prepare for trial by sifting through documents, photos and videos to find relevant material for further human analysis than for a tool police use to decide which drivers to pull over or detain with little or no human review.

Finally, there is a need for ongoing human oversight of AI applications deployed in the justice system. While some AI applications, such as those for writing police reports, include safeguards for protecting privacy and checking the reliability of their output, agencies using them can adjust preferences to enable or disable such mechanisms. Meaningful and ongoing oversight is important to ensure that our cherished values are not sacrificed on the altar of efficiency.

In addition to guarding against these dangers, we must recognize the vast potential for AI to prevent crime and increase trust in the system. A 2023 survey found that just 49 percent of Americans think the justice system is fair, down from 66 percent in 2013, and the system currently suffers from significant human error in everything from witness identification to offender risk assessment.

Some AI applications could make the system more accurate, effective and reliable, and therefore more trustworthy. In a flash, AI can comb through countless hours of video to find the moments when a police officer used force or raised their voice, as well as those when an officer de-escalated after a suspect cursed or shouted. A new study finds that using AI to audit police body camera recordings resulted in more professionalism in interactions between police and the public.
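However the study's own methods worked, the underlying auditing idea can be sketched simply. The toy Python script below scans timestamped body-camera transcript segments for phrases associated with escalation and de-escalation, so a human reviewer can jump straight to those moments. The phrase lists and transcript format are hypothetical illustrations, not the study's technique.

```python
# A toy transcript auditor: flags segments of a body-camera transcript
# containing escalation- or de-escalation-related phrases so a human
# reviewer can jump straight to those moments.
# Phrase lists and transcript format are hypothetical illustrations.

ESCALATION = {"stop resisting", "get on the ground", "put your hands"}
DEESCALATION = {"take a deep breath", "i hear you", "let's talk"}

def audit(transcript):
    """transcript: list of (seconds, speaker, text) tuples."""
    flags = []
    for seconds, speaker, text in transcript:
        lowered = text.lower()
        for phrase in ESCALATION:
            if phrase in lowered:
                flags.append((seconds, speaker, "escalation", phrase))
        for phrase in DEESCALATION:
            if phrase in lowered:
                flags.append((seconds, speaker, "de-escalation", phrase))
    return sorted(flags)

sample = [
    (12, "officer", "Get on the ground, now!"),
    (47, "subject", "This is ridiculous, man."),
    (51, "officer", "I hear you. Take a deep breath and let's talk."),
]

for seconds, speaker, kind, phrase in audit(sample):
    print(f"{seconds:>4}s  {speaker:<8} {kind:<14} matched: '{phrase}'")
```

In practice such a tool would run over automatic transcriptions of entire shifts, surfacing a handful of flagged timestamps for supervisors to review rather than replacing their judgment.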

Given this capacity for quickly identifying unusual patterns, several AI technologies hold potential to boost clearance rates, which have declined dramatically over the last half-century. Murders that were solved at a rate of more than 80 percent in the 1960s are now solved only about half the time, and clearance rates for property crimes hover around 12 percent.

Just as they can help pinpoint perpetrators, AI technologies also hold vast potential for combing through volumes of unexamined evidence to identify wrongful convictions, which also contribute to distrust of the system.

Perhaps the greatest contribution AI can make to building trust is by freeing up the time of justice system actors now consumed with rote work. Instead of devoting countless hours searching for the proverbial needle in a haystack, criminal justice professionals could invest that time in activities that nurture confidence and understanding, whether the interaction is between a police officer and a citizen, a parole officer and a parolee, or a public defender and a defendant.

The adoption of AI will likely be shaped rather than stopped. Our focus must therefore be on ensuring that the technology is deployed in ways that enhance rather than erode public confidence. Given the stakes for our democracy, it would be just as wrongheaded to dismiss the potential benefits of AI as it would be to deploy these technologies without the transparency and guardrails they require.

Governing’s opinion columns reflect the views of their authors and not necessarily those of Governing’s editors or management.
Marc A. Levin is the chief policy counsel for the Council on Criminal Justice. He can be reached at mlevin@counciloncj.org and on X at @marcalevin.
Jesse Rothman is a senior fellow at the Council on Criminal Justice. He can be reached at jrothman@counciloncj.org.