Tremain, Beverly PhD; Davis, Mary DrPH; Joly, Brenda PhD; Edgar, Mark PhD; Kushion, Mary L. MSA; Schmidt, Rita MPH
Evaluation is now typically required by most organizations that fund public health programs and is one of the 10 essential public health services.1–3 Patton defines evaluation as a “systematic collection of information about program characteristics, activities, and outcomes of programs to make judgments about the program, improve program effectiveness, and/or inform decisions about future programming.”4(p23) Evaluation activities can include or support program planning, performance measurement, budgeting, and examination of the process and impact outcomes of a program.3 Each of the five Multi-State Learning Collaborative (MLC) states conducts some form of evaluation or program review of its accreditation or performance assessment system. Understanding the variety of approaches to evaluating these systems provides insights to the broader public health community, particularly in light of a potential national accreditation model.5
In this article, we present the evaluation approaches of the five MLC states as well as observations and recommendations from those responsible for conducting these evaluations. This information was gathered through MLC conference calls and meetings and informal discussions among the authors.
Evaluation Approaches of Five States
The following describes how each state has approached its evaluation.
Illinois
To enable local public health agencies (LPHAs) to be more responsive to local community health needs, Illinois planners developed a new LPHA certification process, anchored in the Illinois Administrative Code, first implemented in 1993. This process required all LPHAs to apply every 5 years to the Illinois Department of Public Health (IDPH) for certification. As part of the application, LPHAs were to complete the Illinois Project for Local Assessment of Needs (IPLAN), using a modified Assessment Protocol for Excellence in Public Health (APEX/PH)6 process for organizational and community assessments and plans. LPHAs were surveyed in 1992, 1994, and 1999 to measure their performance in addressing the 10 organizational public health practices (which are similar to but predate the 10 essential public health services) associated with the three core functions of assessment, policy development, and assurance.
Purpose of evaluation
The intent of evaluating the Illinois certification process was to determine whether utilization of APEX/PH and its Illinois adaptation positively influenced practice performance. The IDPH also ranked the impact of other potential influences, such as IPLAN, the Institute of Medicine's The Future of Public Health report7 and Healthy People 2000,8 the Illinois Public Health Leadership Institute, revised certification rules, and resource reductions in IDPH's Local Health Department state-local liaison unit. The results from these evaluation efforts have been utilized as part of numerous quality improvement efforts facilitated by the IDPH Needs Assessment Advisory Committee.
Michigan
Michigan's mandatory public health accreditation system is one of the nation's most mature systems, with initial implementation in 1997. The mission of the Michigan accreditation program is to ensure and enhance the quality of local public health by identifying and promoting the implementation of public health standards, and evaluating and accrediting LPHAs on the basis of their ability to meet them.
Purpose of evaluation
Because of concerns expressed by LPHAs, Michigan paused the on-site reviews for 1 year, starting in February 2003. The Accreditation Quality Improvement Process Workgroup (AQIP) was formed to evaluate and recommend improvements to the process then in place. The workgroup's primary goals were to engage all key stakeholders in improvement activities, identify opportunities for process improvement, determine which opportunities would have the most positive impact on stakeholder satisfaction, develop recommendations based on those priorities, and develop recommendations for ongoing process improvement. To collect sound data, AQIP designed a survey that examined every aspect of the accreditation process. Local health department administrative staff and state review program managers participated in the survey. The AQIP Workgroup became a permanent committee of the accreditation commission in January 2004, tasked with ensuring that the recommendations contained in the report were implemented. In addition, AQIP was asked to create an ongoing continuous quality improvement process.
Missouri
The Missouri Institute for Community Health (MICH) was founded with a mission to facilitate and promote excellence in community systems for improved health and quality of life. The accreditation program was viewed as a catalyst for system change that could increase public health system capacity.
Purpose of evaluation
The purpose of the three-step evaluation is to examine the process, impact, and outcomes of the accreditation program on LPHAs. The process evaluation assesses key components of the application process, such as using the accreditation Web page, clarity and use of the accreditation manual, quality and value of technical assistance, communications between the applicant and MICH, validity of standards, and completion and submission of the Self-Assessment instrument. In addition, it assesses the interaction with the MICH Onsite Review Team and any changes documented by the LPHA after self-assessment. An impact and outcomes evaluation is conducted to assess change 1 year after accreditation and what processes led to that change. The evaluation intent is to reveal the factors supporting change and barriers to change, methods for LPHA improvement, and effective practices.
North Carolina
In 2005, the North Carolina legislature enacted a mandatory program requiring all 85 local health departments to be accredited by 2014. Accreditation is awarded by the North Carolina Local Health Department Accreditation Board.
Purpose of evaluation
The North Carolina Institute of Public Health (NCIPH) Evaluation Services conducted comprehensive evaluations of the system, using Patton's Utilization-Focused evaluation as a model.4 Ongoing system evaluation includes (1) monitoring system performance for quality improvement, (2) examining accreditation costs and benefits, and (3) identifying improvements in local health department capacity and performance. Agency personnel, site visitors, North Carolina Division of Public Health staff, and NCIPH staff participate in the evaluation through surveys or interviews. Future evaluations will be conducted during each accreditation cycle, which includes 10 health departments per state fiscal year. Currently, evaluation results are provided to stakeholders and partners to inform system improvements and communicate system value.
Washington
The groundwork for the current Washington Public Health Standards Program began in 1993 with the passage of a bill in the state legislature requiring a public health services improvement plan and the development of standards and performance measures for public health agencies. Beginning in 2002, the state and local health departments have demonstrated performance with the adopted standards and measures every 3 years.
Purpose of evaluation
Between the baseline measurement and the 2005 assessment, the agencies were surveyed to evaluate how effectively the process provided useful information and to identify any changes that should be made. Questions addressed the agencies' use of results from previous assessments and their needs in preparing for the current assessment. The approach was based on the Plan-Do-Check-Act, or Shewhart, cycle.9 All Local Health Jurisdictions and Washington Department of Health programs were asked to participate in the survey evaluation. The information was used to prepare for the next round of assessments and informed Department of Health programs and Local Health Jurisdictions about what was actually done with the assessment results.
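The Plan-Do-Check-Act cycle that Washington applied can be sketched in code as a single improvement iteration over survey data. This is a minimal illustration only: the survey questions, the usefulness threshold, and the sample responses below are hypothetical stand-ins, not Washington's actual assessment instrument.

```python
# Minimal sketch of one Plan-Do-Check-Act (Shewhart) iteration applied to a
# survey-based assessment process. All field names, data, and thresholds are
# hypothetical illustrations.

def pdca_iteration(assessment, survey_responses, target_usefulness=0.75):
    """Run one PDCA cycle: plan a survey, do it, check results, act on them."""
    # Plan: decide what the evaluation survey should measure.
    questions = ["used_prior_results", "found_results_useful", "needs_for_next_round"]

    # Do: administer the survey (here, completed responses are passed in).
    answered = [r for r in survey_responses if all(q in r for q in questions)]

    # Check: compare the measured share of "useful" ratings against the target.
    useful = sum(1 for r in answered if r["found_results_useful"])
    usefulness = useful / len(answered) if answered else 0.0

    # Act: queue a change for the next round of assessments if below target.
    if usefulness < target_usefulness:
        assessment["changes_for_next_round"].append("revise reporting of results")
    assessment["last_usefulness"] = usefulness
    return assessment

assessment = {"changes_for_next_round": [], "last_usefulness": None}
responses = [
    {"used_prior_results": True, "found_results_useful": True,
     "needs_for_next_round": "training"},
    {"used_prior_results": False, "found_results_useful": False,
     "needs_for_next_round": "examples"},
]
result = pdca_iteration(assessment, responses)
print(result["last_usefulness"])  # prints 0.5; below target, so a change is queued
```

The point of the sketch is the loop structure, not the arithmetic: each cycle feeds its "check" findings back into the plan for the next round, which is how Washington used survey results to prepare for subsequent assessments.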
Major elements of the MLC state accreditation and performance assessment systems and evaluation processes are presented in Table 1. Although the states report using different evaluation models or have different foci for the evaluation, all states conduct quality or performance evaluation of the accreditation or performance assessment system to ensure that the system is performing as intended and meeting the needs of users. Two states conduct internal evaluations, two conduct internal and external evaluations, and one conducts only external evaluations. The five states use several data-collection methods, primarily interviews and surveys. Regarding frequency of system evaluation, three states have prescribed evaluation cycles occurring at predetermined intervals, and two conduct evaluations as needed. Evaluation stakeholders include state and local health department staff, other state agencies, reviewers and site visitors, public health professionals, and state and local elected officials. All of the states report evaluation results to various stakeholders and use evaluation results to improve the system performance.
Observations and Recommendations
The following recommendations are gleaned from an analysis of the five states' experiences in evaluating their performance assessment/accreditation programs.
Recommendation 1: Use evaluation to help inform, improve, and sustain accreditation efforts
In the MLC states, accreditation and performance assessment system evaluation has two purposes: system quality monitoring (evaluation as a tool to improve the system) and evaluation as a strategy to communicate the benefits of the system.
The evaluation informs the system itself and provides checks and balances on system performance. Performance improvement evaluation allows the accreditation system to pursue continuous quality improvement and adds a critical-appraisal element to the process, engaging all stakeholders in the quality of public health performance. Changes made in response to evaluation results should be intentional, so that the program does not overreact to every finding and fall into a constant state of flux and instability.
Evaluations occur in a political environment.10 Evaluators must weigh the concerns of multiple stakeholders, consider conflicts of interest, and determine the true target of the evaluation.4 Evaluation of accreditation or performance assessment systems can be critical for garnering political support, building accreditation as a concept, and showing the “value added” of accreditation. In North Carolina, for example, besides gauging how well the system worked and identifying areas for improvement, the two pilot accreditation processes included strategies for communicating evaluation results that conveyed the successes and challenges of implementing the system and highlighted participating health departments' support for accreditation.
Recommendation 2: Use evaluation methods and strategies appropriate to the system
The authors suggest focusing on careful planning and structuring of the evaluation and on identifying appropriate measures for the system being evaluated. In other words, use a systematic evaluation planning approach, such as the CDC Evaluation Framework,2 rather than committing to a single evaluation method. As noted above, a key component of evaluating these systems is identifying the various system stakeholders and prioritizing and balancing their evaluation interests. As seen in Table 1, all of the states identified multiple system stakeholders.
Several MLC states focus system evaluation on performance improvement or process evaluation; MLC states found it less feasible to use evaluation methods to measure the impact of the system on health outcomes. In some cases, impact has been examined within individual health departments that have gone through accreditation. But evaluating impacts of an accreditation or performance assessment system is a nascent research area.11
The MLC evaluations use combinations of data-collection methods, qualitative and quantitative, to meet evaluation goals. Methods include interviews, focus groups, surveys, observations, and document reviews. Multiple methods of data collection allow for specific evaluation questions to be examined from several perspectives.
The MLC states use either internal or external evaluation or, in some cases, use both. Regardless of which approach is used, conflicts of interest should be minimized and transparency of the evaluation process and results should be maximized. External evaluations will be more credible if the external evaluators have experience with and understanding of public health, accreditation, and the mission of the system. In high-profile public health programs, using both external and internal evaluation may be appropriate.12
Significant planning is needed to structure the evaluation approach and create appropriate data-collection instruments. Evaluation tools must be appropriate and valid, and developing them takes time. Resources such as time and qualified evaluation staff will facilitate the planning and implementation of a credible system evaluation.
Recommendation 3: Use evaluation to inform public health practice
Performance assessment and accreditation systems are designed to improve public health practice. These systems set a capacity or performance baseline for the health departments and identify areas for capacity and performance improvement. System evaluations can gather these recommended policy and process changes and communicate these more widely to improve public health practice. These systems provide a way for information from a variety of sources to be synthesized and communicated to inform the entire public health community about policies and programs that work.
Recommendation 4: Use evaluation to understand complicated systems
Several factors complicate how performance assessment or accreditation system evaluations are planned, implemented, and reported. First, identifying system outcomes poses some difficulty. How does public health identify the outcome of its work? Can it be measured? How long does it take to show a change? What other variables must be taken into account? Second, public health performance assessment and accreditation systems are political processes. The local health departments have dynamic relationships with the state, where performance determines funding. The evaluation of these systems must recognize these processes and competing stakeholder interests.
MLC funding for Illinois, Michigan, Missouri, North Carolina, and Washington enabled a focused analysis of how these states are implementing accreditation and performance assessment programs, the challenges faced in establishing them, and the lessons learned in the process. Accreditation evaluation is complex and will only increase in complexity as the impact of an accreditation system is examined. Appropriate and adequate time, resources, and evaluation expertise will help ensure the success of assessment and accreditation systems. A guiding evaluation principle, which these five states have used, is that the evaluation questions identify the appropriate evaluation methods, underscoring the need for a variety of evaluation approaches and data-collection strategies.2,3 Tailoring strategies to evaluation questions and stakeholder priorities can minimize the complexity of accreditation and performance assessment system evaluations.
The five states presented here have focused on process evaluation. Process evaluation tells us why and how a program worked,13 and includes quality assurance or program improvement components to ensure that the system is operating as planned and meeting the needs of participants.14 Before we proceed with a national accreditation model, we must understand how the state-level models work and what implementation lessons from them can be translated to a national program.
Debate continues regarding the appropriate evaluation emphasis of accreditation and performance assessment systems and public health programs in general. The Centers for Disease Control and Prevention, in A Guide for Evaluation of Public Health Programs, explains:
Increasingly, public health programs address large problems, the solution to which must engage large numbers of community members and organizations in a vast coalition. More often than not, public health problems—which in the last century might have been solved with a vaccine or change in sanitary systems—involve significant and difficult changes in attitudes and risk/protective behavior of consumers and/or providers.3
This complexity of causes and effects makes it difficult to answer the important question, what difference does an accreditation or performance assessment program make? At the recent MLC 1 conference in Chicago, Illinois, in September 2006, two statements demonstrated the proverbial “push-pull” in public health:
“unless you can demonstrate fidelity of the program and process quality, you have no business talking about outcomes…” and “unless our process leads to an improvement in the public's health, then we are missing the boat.”
The RAND report Getting to Outcomes: Promoting Accountability Through Methods and Tools for Planning, Implementation and Evaluation emphasizes that funders are increasingly looking for outcome data to demonstrate the success of programs.15 Yet, an important tenet of program evaluation is using the “hierarchy of evaluation,” in which outcomes can be assessed only once the program theory has been established and appropriate program implementation has been documented.16
The extent to which accreditation of public health departments leads to improved health and quality of life is influenced by a number of contextual factors within the community. Nonetheless, one common theme from these five states is the intentional use of evaluation strategies to inform both the performance assessment/accreditation systems and public health practice. Evaluation has guided the MLC states' policies and decision making about the process and how to improve it. These evaluation efforts are now beginning to include measures of the outcome or impact that a performance assessment or accreditation system can have on the public health system. As these programs mature, they will tell us more about how accreditation and performance assessment can improve public health practice and, eventually, the health of the public.
1. Davis MV. Teaching practical public health evaluation methods. Am J Eval. 2006;27(2):247–256.
2. Chelimsky E. The coming transformations in evaluation. In: Chelimsky E, Shadish W, eds. Evaluation for the 21st Century. Thousand Oaks, CA: Sage Publications; 1997:1–26.
3. US Department of Health and Human Services, Centers for Disease Control and Prevention. Introduction to Program Evaluation for Public Health Programs: A Self-Study Guide. Atlanta, GA: Centers for Disease Control and Prevention; 2005.
4. Patton MQ. Utilization Focused Evaluation: The New Century Text. 3rd ed. Thousand Oaks, CA: Sage Publications; 1997.
5. Final recommendations for a voluntary national accreditation program for state and local health departments. Exploring Accreditation Project Web site. www.exploringaccreditation.org. Accessed October 2, 2006.
6. Turnock B, Handler A, Hall W, Lenihan P, Vaughn E. Capacity-building influences on Illinois local health departments. J Public Health Manag Pract. 1995;1(3):50–58.
7. Institute of Medicine. The Future of Public Health. Washington, DC: National Academy Press; 1988.
8. US Department of Health and Human Services. Healthy People 2000: National Health Promotion and Disease Prevention Objectives. Washington, DC: Government Printing Office; 1990.
9. Shewhart W, Deming W. Statistical Method From the Viewpoint of Quality Control. Mineola, NY: Dover Publications; 1986.
10. Chelimsky E. The political environment of evaluation and what it means for the development of the field. In: Chelimsky E, Shadish W, eds. Evaluation for the 21st Century. Thousand Oaks, CA: Sage Publications; 1997:53–71.
11. Joly B, Polyak G, Davis MV, et al. Linking accreditation and public health outcomes: a logic model approach. J Public Health Manag Pract. 2007;13(4):349–356.
12. Umble K, Orton S, Rosen B, Ottoson J. Evaluating the impact of the management academy for public health: developing entrepreneurial managers and organizations. J Public Health Manag Pract. 2006;12(5):436–445.
13. Linnan L, Steckler A. Process evaluation for public health interventions and research: an overview. In: Process Evaluation for Public Health Interventions and Research. San Francisco, CA: Jossey-Bass; 2002:1–23.
14. Green LW, Kreuter MW. Health Promotion Planning: An Educational and Ecological Approach. Mountain View, CA: Mayfield Publishing Company; 1999.
15. RAND Corporation. Getting to Outcomes 2004: Promoting Accountability Through Methods and Tools for Planning, Implementation, and Evaluation. Santa Monica, CA: RAND Corporation; 2004.
16. Rossi P, Lipsey M, Freeman H. Evaluation: A Systematic Approach. 7th ed. Thousand Oaks, CA: Sage Publications; 2004.
© 2007 Lippincott Williams & Wilkins, Inc.