What Happens When Evidence-Based Policymaking Meets the Real World

Problems arise.

You can’t swing a cat in public administration circles these days without hitting some mention of “evidence-based policymaking.” Sara Dube, director of the Pew-MacArthur Results First Initiative, assures us that all 50 states have made at least some effort to gather solid, data-based information that can help executive and legislative leaders make good decisions.

Not every state relies on evidence to the same degree. Some, including Colorado, Minnesota, New Mexico and Washington, have rigorous processes for applying evidence in their budgeting and decision-making. At the other end of the spectrum are West Virginia, New Hampshire and the Dakotas, which don’t really incorporate data in any formal or organized way. Most states fall somewhere in between, and many of them lack the funding or staff it would take to apply the approach routinely. 

But even with money and staff, evidence-based policymaking can be fraught with peril. For one thing, once evidence seems to indicate that a program is successful, public officials often declare victory and move on. “But there are few easy wins with health care, poverty, education and so on,” says Gary VanLandingham, a professor at Florida State University and a national expert on evidence-based policymaking. “No single program or process is a silver bullet that will solve all problems. But everyone loves silver bullets.”

So even when a strong body of evidence points to one approach to a problem, that approach isn’t necessarily the only route to success, and a search for multiple solutions is often worth the time and effort.

Another issue: If evidence shows that something works in, say, Albuquerque, people assume it will be a success in Pittsburgh as well. “You need to implement these programs with fidelity while still adapting them to new contexts,” says VanLandingham. “That is very hard to do. When efforts succeed in one jurisdiction, that doesn’t mean they will in another.”

In other words, one size does not fit all. Sometimes that’s because different jurisdictions simply have different needs and different demographics. But often, the problem is that the programs themselves aren’t replicated consistently when they’re exported from one place to another. 

Consider a situation in Colorado, as described in a report published by the IBM Center for the Business of Government. The state had adopted a federally backed home-visiting program for at-risk pregnant women and parents with young children. The model had shown success in other states. Yet when Colorado subsequently evaluated its program to see how well it was working, the state learned that “not all home-visiting providers were performing the same or implementing the model as intended,” according to the IBM paper, and that the “required monthly visits were not always happening.” 

Even beyond the pitfalls of trying to replicate success, there’s the underlying question of what “evidence-based” even means. The gold standard for data, of course, is the randomized controlled trial. But those kinds of experiments are costly and time-consuming, requiring resources that states don’t have to spare. And while randomized trials might technically be the best way to measure a program’s impact, politically they’re a tough sell. Who wants to be in the group that’s not receiving a potentially beneficial new service? Who wants to drive on the control road that doesn’t have the new technology aimed at reducing traffic fatalities?

An example comes to mind from Flint, Mich., after its drinking water debacle became a national outrage. The city wanted to develop a plan for alleviating the problem, so water officials ran lead tests on one set of pipes but not on others. Individual citizens didn’t know whether their own pipes had been tested; all they knew was that they’d had a 50-50 chance of landing in the untested control group. No one was happy.

So there’s a dilemma even with the gold standard for finding evidence on which to base policy decisions. Nobody wants to find out they’re getting the lead instead of the gold.