And yet, some powerful evidence recently crossed our desks that caused us to dig more deeply into the validity of that belief.
Our epiphany came from reading the recent Equipt to Innovate survey, a joint effort by Governing and the nonprofit Living Cities covering 66 cities of various sizes. One of the questions asked was, “Does the city have a policy or ordinance establishing the review and modification of practices, programs, and/or policies that have consistently failed to achieve desired outcomes, using rigorous data analysis and evaluation?”
You’d like to think that most cities would have said yes. But in fact, nearly four out of five respondents said no: they weren’t using data analysis to examine why certain programs were consistently failing to achieve their desired outcomes.
That’s not how this was supposed to work. The dream of good data was that governments would actually use it to evaluate which programs were working, and which were not. An article in Government Finance Review nearly a decade ago said that’s “exactly what Budgeting for Outcomes and other priority-based budgeting methods were designed for.”
“That was my dream as well,” says Ken Miller, founder of the Change and Innovation Agency, which focuses its work on programs affecting low-income families and children. (Miller was also once a Governing columnist, and he published two books with Governing.) When he worked for Missouri Gov. Mel Carnahan in 1998 and 1999, he says, “the goal was to use performance management to find programs to get rid of and to invest in.” To Miller’s disappointment, poorly performing programs seemed to live on, despite efforts to zap them.
There are several reasons why the data-rational approach doesn’t work out. For one thing, any effort to scale back or eliminate a program has a built-in enemy: the men and women who have become true believers in the program’s mission, and who might stand to lose their jobs if it goes away. “Their defensive posture is very high,” says Chris Fabian, CEO of ResourceX, a consulting group that works to help cities use their funds most efficiently. “They have ownership over their programs.”
What’s more, analyzing data and evaluating programs requires substantial resources that even some bigger cities don’t have. Salinas, Calif., a city of 160,000, is a case in point. As Eric Sandoval, the city’s innovation team lead, notes, “a number of cities don’t have the management analysts to even think about data tracking, much less repurposing of programs. They don’t have the staff to lead the charge.” Even if cities somehow could muster the manpower, he adds, “they don’t have the training.”
There’s another big reason it’s so hard to kill off ineffective programs: Almost all of them had a good reason for getting funded in the first place. Even if they don’t seem to be working, says David Ammons, an expert in the field who now teaches at the University of North Carolina’s School of Government, “the purpose is still sufficiently great that we can’t just walk away unless we can find an alternative.”
Just because a program isn’t achieving its goals doesn’t necessarily mean the best solution is to ditch it. In some cases, it might actually be more effective to increase funding, not cut it. Then, too, there’s the option of outsourcing programs, not necessarily to save money but to see if that alternative works better.
At any rate, there is some cause for optimism at the heart of all this. Perhaps the real reason performance information hasn’t helped get rid of lots of wasteful programs, Miller says, is that governments aren’t riddled with them in the first place. “It’s a unicorn myth,” he says, “that there are lots of programs that don’t do any good and we’re going to get rid of them.”