Final metric development
After matching MLC indicators to Operational Definition standards, the preliminary Operational Definition indicators were grouped and consolidated. Recognizing that performance can occur at several levels, the Handler et al systems model of public health work was used to categorize each indicator as having a capacity, process, or output orientation.8 Capacity, or inputs, included measures such as human resources, fiscal and physical resources, information resources, and system organizational resources necessary to carry out the core functions of public health. Process indicators were described as steps in a program logically required for the program to be successful. Finally, outputs were health programs and services intended to prevent death, disease, and disability, and to promote quality of life. These categories comprise the public health infrastructure that should exist within each health department, suggesting that to fully describe an agency's performance, indicators needed to be available for each category.
Once the indicators were categorized, an extensive review of their quality, diversity, and quantity began. Duplicate and redundant indicators were revised or eliminated. For some Operational Definition standards, few MLC site indicators were available. When no MLC state and local indicators were available to match an Operational Definition standard, NPHPS was used as a placeholder. The indicator chosen was either an actual NPHPS indicator or one that the development team created from practice-based experience, and the final set of indicators was coded to document their origin (1 = exact measure from an MLC state; 2 = measure had the same meaning as a state measure but was slightly reworded; or 3 = developed from practice experience or NPHPS was used).
Development of illustrative evidence
A final step in the indicator development process was to identify how it might be determined that a standard and its associated indicator were met. Evidence used by existing performance improvement and certification programs, along with that from the MLC sites, was reviewed to provide a list of proposed evidence that matched the final set of Operational Definition metrics. One piece of evidence could apply to multiple indicators, so evidence was proposed for all indicators associated with a given Operational Definition standard rather than attempting to create a one-to-one match between each indicator and a piece of evidence. This had the effect of reducing the evidence that would be needed to demonstrate that a standard and its indicators had been met, thereby reducing the effort required to apply the Operational Definition.
Validation and final product
A modified Delphi process was used to validate the Operational Definition indicators and evidence.9 Thirty-six local health department officials reviewed and provided feedback through both a survey and focus group phone calls on the validity of the developed indicators and the ability to demonstrate corresponding evidence.
The final set of Operational Definition Prototype Metrics includes 250 indicators that were developed over a 10-month process and can be found on NACCHO's Web site: http://www.naccho.org/topics/infrastructure/operationaldefinition.cfm. Figure 3 presents a sample page from the indicator document. The source coding key for each indicator appears at the top of the first page. The left-hand column includes the Operational Definition standard along with the “focus statement” developed to help translate the NPHPS and MLC indicators to the Operational Definition. The middle column presents the actual indicators, grouped by Capacity, Process, and Output, and coded for their original source. Finally, the far right-hand column presents the illustrative evidence.
Validity of the Tool
How well Operational Definition indicators actually capture local public health agency activities and how easily the tool can be used are important considerations in judging diffusion in the field. Researchers generally employ several related concepts in evaluating the validity of a set of measures to capture phenomena they are trying to understand and predict.10 The research concepts of internal validity and external validity are relevant to this discussion. Internal validity examines the degree to which a measure captures the elements it is meant to measure, in essence asking the question, “Are you measuring what you think you are measuring?” The external validity perspective looks at whether the metrics as a whole are sufficiently broad to have general application in the field. For practice purposes, these concepts are important for knowing how well the indicators measure what is actually going on in local health departments today (ie, internal validity) and how well the Operational Definition Prototype Metrics represent a comprehensive picture of a local health department (ie, external validity).
The validity of the Operational Definition Prototype Metrics can, in part, be demonstrated by revisiting how the metrics were developed. The Delphi review process provided an assessment of internal validity, as the Delphi participants served as an expert review panel that rated each indicator on a 5-level Likert-type scale. The Delphi panel rated fewer than 5 percent of all indicators as poor. These indicators were revised in the second round of Delphi reviews, suggesting that the current set of indicators represents a good measurement of each local health department function considered.
The extent to which the Metrics represent a robust portrayal of a typical local health department is more difficult to demonstrate. Two factors are useful in assessing generalizability: indicator quantity and goodness of fit. Interpreting quantity is straightforward: a larger number of indicators will better capture the practice diversity of a given standard. For at least one half of the Operational Definition indicators, there was a large variety of MLC practice-based indicators. However, for some Operational Definition standards, there were too few MLC indicators available for use, necessitating the use of NPHPS or expert judgment. For these standards, generalizability must be determined through future application. Goodness of fit refers to how closely a set of indicators clusters around a theme. Although the MLC indicators were diverse in wording, their content tended to cluster clearly around Operational Definition standards. In summary, we will not know whether the entire set presents a truly comprehensive picture of a local health department until the metrics tool is widely field tested, but this preliminary assessment is promising.
Applications of the Operational Definition for Local Public Health Practice
This initial version of the Operational Definition indicators is the first of its kind to define and describe local health department activities from a performance perspective. As a first attempt, the Operational Definition indicators should be considered in the formative stage of development. While a systematic development process has grounded the indicators in both national public health conceptual frameworks and public health practice, this process has derived only a pilot set of indicators that appear to have a good potential of being applied in practice. It is the practice application at the local level that will convert this promising pilot into a well-accepted tool for local public health agencies.
Indicators developed for Operational Definition Standards have a great potential for transforming public health practice. Exactly how that will unfold is still speculative, but several interrelated possibilities for the application of the Operational Definition seem apparent.
Defining and Marketing Local Public Health
Over half of all indicators in the MLC states matched Operational Definition standards, and more than 95 percent of the Operational Definition indicators received favorable Delphi ratings; these findings suggest that the Operational Definition has already made significant progress in providing the foundation for a uniform performance-based definition of local public health departments. Individual health departments will always have unique characteristics reflecting population, geography, politics, history, and other environmental factors, but at a fundamental level, their functions should generally have some uniformity. Developing a common language and functional description at the agency level is vital to the future of local public health. The Operational Definition may play a key role in helping bridge surface differences and uncover those basic common characteristics. Even if health departments never align exactly with the Operational Definition, the tool becomes a touchstone to help practice initiatives move in a common direction.
Beyond communicating within the public health field, the use of the Operational Definition should enhance communication with the public and policy makers. Articulating local health department functions in a performance-based and standard way should increase awareness of the value of local public health and of how local public health should work when performing well. Increased awareness can also help public health agencies build constituencies and partnerships for programs and services. This is particularly important in an era when public health is still not well understood or appreciated, or is defined by a narrow subset of functions (eg, emergency preparedness) that happen to be high on policy makers' and the media's list of concerns. To advance this understanding, NACCHO has developed a series of fact sheets about local health departments, based on the Operational Definition. Three different versions were targeted to elected officials, the general public, and health department employees (NACCHO Web site).
Basis for accreditation
Having a uniform set of performance metrics should help accelerate efforts toward a national accreditation program. The Robert Wood Johnson Foundation's Exploring Accreditation Initiative generated a set of recommendations for establishing a voluntary national accreditation program for state and local health departments. An essential ingredient in any accreditation scheme is the availability of standards and indicators by which to assess performance. The Operational Definition in its entirety is well positioned to serve as the initial set of standards, indicators, and evidence to be pilot-tested for use in a voluntary accreditation program.
Performance measurement, management, and quality improvement
An expected use of the Operational Definition involves performance measurement, management, and quality improvement. Much as the NPHPS have been used to set standards for the assessment of local public health systems, the Operational Definition sets standards and indicators to help assess the performance of local public health departments. But the Operational Definition indicators do not yet approach the comprehensiveness of NPHPS for measuring agency performance, and significant gaps remain. Analysis of the currently available indicators from MLC states suggests that while some states have a performance improvement process in place, even these state indicators target specific essential services rather than the performance of the health department as an organization.
While not yet complete, the Operational Definition may well provide an excellent starting point for performance measurement. This purpose could be expanded through a national program to collect the Operational Definition assessment data, which could be developed in conjunction with an accreditation system. Incorporating performance measurement and quality improvement into the public health accreditation system will add a more tangible value to accreditation, which may counter potential opposition based on claims that accreditation by itself adds little of value to public health department service delivery capacity.
Public health agency strategic management
The Operational Definition comes at a time when strategic management is becoming increasingly important to effective public health practice. Strategic management tools, such as MAPP, business process analysis, PACEEH, assessment of clinic service delivery options, and Project Public Health Ready, all require a more detailed understanding of local public health agency purpose and performance. The Operational Definition could be important in furthering strategic management efforts.* Most approaches to strategic management include a detailed analysis of an organization's strengths and weaknesses based on generally accepted performance indicators. In sectors other than public health, these indicators are available and can be used to benchmark performance relative to similar organizations in an industry or a sector. Even when a point of comparison is not available, having a set of generally accepted performance indicators provides the diagnostic basis to understand strengths and weaknesses and pursue realistic strategies. Once these strategies are in place, performance indicators then provide the basis for strategic management control, as they can be used to gauge progress and document success. Having such performance milestones is particularly important for public health agencies, whose success often hinges on distant and elusive goals aimed at improving community health.
Even in their preliminary form, the indicators are an improvement over what is usually used to assess local public health agency performance. Other tools are starting to become available that, coupled with the Operational Definition, provide a capacity to analyze local health department functions in a detailed and uniform manner similar to what has been done in other sectors. One example is the application of business process analysis and design to local health departments.11 As in other sectors that have adopted strategic management methods, sophistication in application and broad acceptance grow with use.
The Operational Definition can be viewed most simply as the next step in addressing the lack of understanding and appreciation of public health observed in the 1988 IOM report. It comes at a time of great opportunity and challenge. A clearer understanding of the role of local public health departments based on performance can lead to a profound new vision that can inspire a transformation of local public health.
1. Institute of Medicine. The Future of Public Health. Washington, DC: National Academy Press; 1988.
2. Turnock BJ, Handler AS. From measuring to improving public health practice. Annu Rev Public Health
3. Harrell JA, Baker EL. The essential services of public health. Leadersh Public Health.
4. Tilson H, Berkowitz B. The public health enterprise: examining our twenty-first-century policy challenges. Health Aff.
5. National Association of County and City Health Officials. Operational Definition of a Functional Local Health Department. Washington, DC: National Association of County and City Health Officials; 2005.
7. Morse JM. The significance of saturation. Qual Health Res.
8. Handler A, Issel M, Turnock BA. Conceptual framework to measure performance of the public health system. Am J Public Health
9. Delbecq AL, Van de Ven A, Gustafson DH. Group Techniques for Program Planning: A Guide to Nominal Group and Delphi Processes. Glenview, IL: Scott Foresman; 1975.
10. Cook TD, Campbell DT. Quasi-Experimentation: Design & Analysis Issues for Field Settings. Boston, MA: Houghton Mifflin; 1979.
11. Public Health Informatics Institute. Taking Care of Business: A Collaboration to Define Local Health Department Business Processes. Decatur, GA: Public Health Informatics Institute; 2006.
*This workgroup included the Association of State and Territorial Health Officials, the National Association of County & City Health Officials (NACCHO), the Institute of Medicine (National Academy of Sciences), the Association of Schools of Public Health, the Public Health Foundation, the National Association of State Alcohol & Drug Abuse Directors, the National Association of State Mental Health Program Directors, and the US Public Health Service.3
*These tools, available through NACCHO, include the following:
- Mobilizing Action Through Planning and Partnerships
- Taking Care of Business (business process analysis)
- Making Strategic Decisions About Service Delivery: An Action Tool for Assessment and Transitioning
- Protocol for Assessing Community Excellence in Environmental Health
- Project Public Health Ready (emergency preparedness planning)
Keywords: accreditation; operational definition; performance indicators; performance management; quality improvement; strategic management

© 2007 Lippincott Williams & Wilkins, Inc.