This is at least in part a self-inflicted wound: the way academic research is measured and incentivized squanders intellectual resources by encouraging academics to write only for one another, to use methods that are inaccessible to outsiders, and to publish in outlets and formats that have little chance of influencing the actual practice of public administration or policymaking.
To start, it is important to understand that, for academics on the tenure track, "publish or perish" is only slightly hyperbolic. For these faculty members, an evaluation, usually in the sixth year after appointment, results in either termination or, in effect, lifetime employment. Typically, success means publishing in the "best" peer-reviewed journals, having one's articles cited by other researchers and winning recognition from one's more senior peers. As technology has made quantitative comparison easy, the pursuit of "objective" standards of quality, most visibly a citation-based numeric ranking of journals known as the "impact factor," has been too tempting to resist. Universities often use these measures to establish something like a common metric that permits comparisons of scholars across a great many varied disciplines.
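For readers who have not encountered the metric, the standard two-year impact factor is simple arithmetic: citations received in a given year to a journal's articles from the previous two years, divided by the number of citable articles the journal published in those two years. The sketch below, using a hypothetical journal and made-up counts, illustrates the calculation behind figures like those cited in the next paragraph.

```python
# A minimal sketch of the standard two-year journal impact factor.
# The journal and all counts below are hypothetical illustrations.

def impact_factor(citations_this_year: int, citable_items_prior_two_years: int) -> float:
    """Citations in year Y to articles published in years Y-1 and Y-2,
    divided by the number of citable items published in Y-1 and Y-2."""
    return citations_this_year / citable_items_prior_two_years

# Suppose a journal published 90 citable articles in 2013-2014, and those
# articles were cited 240 times in 2015: its 2015 impact factor is ~2.67.
print(round(impact_factor(240, 90), 3))  # 2.667
```

Note what the denominator leaves out: nothing in this calculation asks whether anyone outside academia ever read, used or acted on the work.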
In a field like public administration or public policy, however, equating citations with "impact" seems extremely narrow and short-sighted, since it does not even attempt to measure whether the research has entered the policy or management arena. Consider that the top five public administration journals in 2015, ranked by impact factor, were the Journal of Public Administration Research and Theory (impact factor: 3.893), Governance (3.424), Regulation & Governance (2.724), Public Administration Review (2.626) and the Journal of Policy Analysis and Management (2.329). I would submit that the vast majority of practitioners have never heard of the top three of these journals and have rarely read an article in any of the five.
Moreover, the need to publish articles in peer-reviewed journals that will be cited by other academics has encouraged research built on large data sets and analyzed with increasingly sophisticated quantitative methods, with results that are not intended to be accessible to the broader practitioner community. Careful statistical analysis is appropriate for many questions, but it now dominates to the exclusion of methods that are often more appropriate, such as interviews and participant observation.
In short, academics increasingly do research that requires them neither to venture more than a few feet from their computer screens nor to talk to other human beings, compromising their ability to grasp the nuances of many issues. Practitioners know that these nuances are often critical to understanding the complexity of problems. Ph.D. students, who are often advised to write quantitative dissertations, become the assistant professors who soon learn that such research is more likely to be published in the journals that yield the citations, and thus the measured impact, necessary to survive the tenure process.
Further, even when research might yield important lessons for practice, the people who would most benefit from its findings normally never see them. Instead, the findings are reported in an echo chamber of academics writing for other academics. The kinds of products that might have broader impact (op-eds, blog posts, shorter pieces in trade publications, testimony before state or local governing bodies) not only do not count toward tenure but are viewed as detracting from the real work: publishing for other academics.
This has prompted some pushback against the way academic incentives are structured. Some have advocated moving toward a broader family of measures, called "altmetrics," that would count the introduction of ideas into the policy process as part of scholarly impact. Similarly, my University of Maryland colleague KerryAnn O'Meara, an expert on higher education, argues for replacing a narrow, calcified view of impact with a more flexible one that permits arguments about impact based on different types of evidence.
The academic disciplines of public administration and public policy developed out of a need for problem-focused, interdisciplinary fields whose raison d'être was the development and dissemination of knowledge that could be applied to improve both government and society. If we truly want our research to matter, we as academics must be willing to embrace measures focused on actual policy and practical management concerns rather than continuing to reward ourselves for talking to each other.