Next came CitiStat, Baltimore's own adaptation of this innovation to improve performance in an entire jurisdiction. This prompted similar approaches in other cities -- from the large, such as Atlanta and San Francisco, to the small, such as Palm Bay, Florida, and Somerville, Massachusetts. Then, as Ellen Perlman has noted in Governing, "'Stat' Fever" really got hot. For example, the San Diego district of the U.S. Border Patrol created BorderStat, and the state of Washington created the Government Management Accountability and Performance program.
Each one of these performance strategies -- all of which fit in my general class of "PerformanceStat" -- is different. They have to be. Neither the performance purpose nor the performance context is the same. The leadership team of each jurisdiction and each agency has to adapt the basic principles of PerformanceStat to its own objectives and circumstances.
However, as I've studied over a dozen examples of PerformanceStat, I have been struck by how many programs do not quite appreciate (or at least employ) some of the core principles of the strategy. Below are the five big errors that I have observed.
First, however, I should provide my definition of "PerformanceStat":
A jurisdiction or agency is employing a PerformanceStat strategy if it holds an ongoing series of regular, frequent, periodic and integrated meetings involving the chief executive, or the principal members of the chief executive's leadership team, plus the individual directors and top managers of the different subunits. The meetings must focus on the use of data to analyze a subunit's past performance, to follow up on previous decisions and commitments to improve performance, to establish its next performance objectives, and to examine the effectiveness of its overall performance strategies.
This definition is not very restrictive -- lots of management activities fit into it. So I'm not complaining about public managers who fail to implement the details of my own, narrow, idiosyncratic concept. I'm talking about jurisdictions or agencies that miss something very basic.
Error #1: No Clear Purpose. Too often, PerformanceStat is nothing more than the latest government fad. The manager says, "Ooh, cool hammer," and goes looking for a few nails to pound. Instead -- indeed, as always -- managers need to start with a clear purpose, answering such questions as: What results are we trying to produce? What would better performance look like? And how might we know if we had made some improvements? Only after the leadership team has answered these questions can it adapt the PerformanceStat strategy to its own purposes.
Error #2: No One Person Authorized to Run the Meetings. Someone has to run each meeting. And that someone ought to be the same someone. Unfortunately, the chief executive -- either the elected executive or the agency director -- will be unable to be present throughout every meeting. Consequently, this chief executive needs to delegate, officially and unequivocally, a key deputy to conduct every meeting. Otherwise, there will be no consistency of purpose from one meeting to the next.
Error #3: No Dedicated Analytic Staff. Who looks at the data? Who tries to figure out whether performance is improving or not? Who tries to figure out what new approaches should be considered? The managers of the various subunits need to do this. But the leadership team of the jurisdiction or agency needs a few people dedicated to this too. And these people can't additionally have a dozen other higher-priority tasks. For the PerformanceStat strategy to produce meaningful results, it needs a few people to be solely dedicated to analysis.
Error #4: No Follow-Up. What is the relationship between the issues discussed at this subunit's previous meeting and those examined at its meeting today? Did today's meeting build on the problems identified, solutions analyzed and commitments made at the previous meeting? Or are we starting anew? If the PerformanceStat approach is to produce real improvements in results, it has to focus on the key results that need improvement. And it has to focus on them meeting, after meeting, after meeting.
Error #5: No Balance between Brutal and Bland. Both the New York Police Department's CompStat and Baltimore's CitiStat programs established a reputation for being very tough on poor performers. In reaction, other agencies and jurisdictions have consciously tried to make their meetings as harmonious as possible. Still, the leadership team can't let subunits off the hook when they offer bland assertions of wonderful progress. Nor can it rely purely on brutal critiques. To truly improve any subunit's performance, the leadership team needs to both pressure its managers and help them.
Yet in an overreaction to the NYPD's and Baltimore's reputations, some jurisdictions and agencies have designed meetings that are little more than show-and-tell. The subunit's manager presents yet another glowing picture of the unit's latest accomplishments. Unfortunately, if the leadership team has failed to specify what it is trying to accomplish, if it has failed to designate someone to run every meeting, if it has failed to create its own analytic staff, and if it has failed to conduct any follow-up since the previous meeting, it is unable to do much more than applaud this delightful show.
PerformanceStat isn't a model or a system. It can't simply be copied. Consequently, it cannot be airlifted from one organization into another. PerformanceStat is a leadership and management strategy that can be employed to produce real results in a variety of government jurisdictions and public agencies. However, success requires a complex, cause-and-effect appreciation of how this performance strategy will work in a particular environment.
Reader Responses:
PublicWorksStat
Sergio Panunzio, superintendent of public works for Union Township, New Jersey, validates Robert D. Behn's argument about the Stat performance strategy and shows how it can be applied to a Department of Public Works. Mr. Panunzio notes that the performance tool has helped him consistently reduce overtime and cut unnecessary emergency responses. For more information, contact him at SPanunzio@uniontownship.com.
Posted Jan. 11, 2008
Focusing on Core Measures
Over the past 12 years, the Dallas County Tax Office has implemented performance-incentive programs in two different departments that have produced very successful results: the highest productivity per staff person in the state; significantly increased customer compliments and reduced complaints; a reduction in total staff; significant improvement in response times (reduced customer wait time in lines, faster telephone response time, faster response to mailed requests); a 12 percent overall improvement in productivity per staff person; two state-level recognitions and one national recognition for quality management; and, in 2007, recognition as the only government agency in the state of Texas to receive the Texas Baldrige Award (congrats to Coral Springs, Florida, for being the first government agency ever to earn a national Baldrige Award).
In short, our program has been very successful. I believe that it has been successful because 12 years ago, we created, fine-tuned and tweaked it based on three core principles: (1) we would focus on only three or four core measures (based on the Southwest Airlines model, in which one core measure, profit per seat, serves as an indicator of the health of absolutely every other aspect of the organization, and all other measures, analyses and decisions ripple out from that one core source); (2) we would make sure that the three or four measures were easy to gather and monitor (to this day, department managers can create their monthly performance report in an hour and a half per month); and (3) we created a performance-reward program, not a performance-punishment program (solving Dr. Behn's observed dilemma of brutal vs. bland).
From the beginning, the program was designed by the staff and managers as a tool to create challenging but attainable performance standards and then provide monthly bonus pay to each employee who exceeded those standards during that month. This creates buy-in and excitement.
I have presented these concepts at several national conferences in recent years (most recently just two weeks ago in Las Vegas) and always encounter intense interest from other agencies. I have been a student of the performance-measurement movement since it began, I suppose, in Sunnyvale 25 years ago; it moved to Indianapolis with Mayor Goldsmith and has grown somewhat since then. It amazes me that we are now 25 years into this movement and it has not progressed any more than it has. Of course, the two core obstructions have been that measures are too often presented as "gotcha" punishments and that governments fear accountability. That is why I have believed that the key to greater success is to present measures in a positive light, as a way for the department head to report successes at improving the agency and a way for the staff to be recognized for their personal efforts and contributions to the organization.
Such an approach has certainly created significant improvements in my organization. There is absolutely no reason that measures cannot be presented as an opportunity, or that a quality group of "core" measures cannot be developed (for fire departments, street departments, etc.) for use and adaptation by government agencies. These should be the two missions of the performance-measurement movement for the next 10 years.
David Childs, Ph.D.
Dallas County Tax Assessor
Dallas, Texas
Posted December 21, 2007