Butler, James BBA; Tews, Debra MA; Raevsky, Cathy BA; Canavese, Janet BS; Wojciehowski, Kathleen JD; Michalak, Craig MBA; Thomas, Monecia MHA; Brewster, Joan MPA; Mason, Marlene MBA; Schmidt, Rita MPH
In 1988, the Committee for the Study of the Future of Public Health identified the core functions of assessment, policy development, and assurance as key roles of the public health governmental sector.1 Illinois, Michigan, Missouri, North Carolina, and Washington have developed tools in their individual states to measure how state and/or local government entities carry out these roles. Their selection as participants in the Multistate Learning Collaborative (MLC) on Performance and Capacity Assessment or Accreditation of Public Health Departments has given each state the opportunity to explain to one another and to the nation how and why they developed and implemented their measurement methods and tools.
The Robert Wood Johnson Foundation, through the National Network of Public Health Institutes and the Public Health Leadership Society, funded the MLC. Grant objectives included enhancing the accreditation/assessment activities already under way in the grantee states, promoting learning among the states in the collaborative, disseminating information to the larger public health practice community, and serving as a learning laboratory for the National Exploring Accreditation Project.2,3
The recommended model for accreditation activities, as articulated by the Exploring Accreditation Project, includes a readiness review and self-assessment by health agencies; staff review, site visits, and a recommendation report by an accreditation entity; and an appeals process by which health agencies can address the final determination of accreditation status. In a recent study, components from 22 accreditation programs in other service industries, such as healthcare, education, social service, and public service, include internal self-assessments, on-site visits, submission of standardized performance measures, and summary assessments and scorings.4 In the same study, findings show that most of the programs analyzed rely on assessment processes that involve both a self-study and a site visit to determine compliance with established standards of practice.4 Four of the five MLC states rely upon a self-assessment for internal use and preparation, as well as a validating site visit.3 In this article, we describe the experience of these four MLC states—Michigan, Missouri, North Carolina, and Washington—in developing and conducting the on-site review component of assessment activities.
Site visits are part of many accreditation programs including state and local public health assessment programs.3,4 They are recommended by the Exploring Accreditation Project because it is assumed that firsthand examination of facilities and conversations with public health leaders and practitioners give reviewers a better understanding of organizational capacity than merely reading about policies and activities or reviewing a self-study produced by the organization. However, accreditation site visits are labor-intensive for all entities involved and add expense for the accrediting entity; they may therefore be considered less attractive when planning such programs, particularly in large states with many local public health departments.
The purpose of this article is to examine and elucidate how states in the MLC have designed the site-visit component of their assessment programs, what they see as critical elements of the site-visit process, and what they recommend to other states that may be considering including a site-visit component as part of their assessment or accreditation programs. We also examine the implications for a national model for capacity assessment or accreditation.
State Accreditation/Assessment Programs
The information for this article was gathered from accreditation/assessment experts from each MLC state that uses on-site reviews, as well as from data sources such as the MLC members' Web site, maintained by the National Network of Public Health Institutes, and the individual Web sites of the MLC grantees.5–8 The experts provided information about their states' review processes and affirmed the up-to-date status of their Web sites. The 10 MLC accreditation/assessment experts, interviewed during July 2006, have been extensively involved in all aspects of their states' accreditation or assessment programs and individually have from 1 to 14 years of accreditation or assessment experience (mean, 6.1 years). Given that accreditation programs are still emerging in the field of public health,4 this depth of experience is noteworthy.
In this article, the terms “site-visit” and “on-site review” are used interchangeably and defined as a site-specific visit to assess, observe, interview, review, evaluate, and/or survey a local/state agency or program regarding its ability to meet a set of public health standards.
The following describes the elements of each state's accreditation program.
In Michigan, the Local Public Health Accreditation Program evaluates the ability of local health departments (LHDs) to meet standards that are based on Michigan's Public Health Code, state administrative rule or policy, and/or professionally accepted standards of practice. This mandatory program evaluates overall organizational capacity and individual program performance of the core functions of assessment, policy development, and assurance. The accreditation process includes LHD self-assessments and weeklong on-site reviews every 3 years, evaluating LHD organizational capacity and the performance of up to 12 specific programs.
Michigan's Accreditation Steering Committee, formed in 1996 and comprising public health professionals from government and academia, developed the state's local public health accreditation process and determined that requiring both self-assessments and site visits would enable LHDs to recognize success within the organization and implement valuable improvements where needed.9–11 The combination of self-assessments and site visits provides a more complete picture of LHD performance than reliance upon either a self-assessment or site visit alone. To date, Michigan has conducted on-site reviews of its 45 LHDs twice and is in the midst of the third cycle.
The Missouri voluntary accreditation program of Local Public Health Agencies is designed to measure accountability, provide state and local elected officials a public health capacity model, and encourage excellence in current public health practice. Standards measure agency performance, based on NACCHO's Operational Definition of a functional LHD,12 and are divided into eight domains: planning processes; governance/leadership; facilities/service delivery; financial and resource management; workforce; information technology; communications; and intergovernmental issues. The Missouri Institute for Community Health, a nonprofit corporation established in 2003, was formed to administer the program. Site visits are conducted over a 1- to 3-day period in 3-year cycles.
The focus of North Carolina's Local Health Department Accreditation (NCLHDA) is on the capacity of the LHD to perform the three core functions of assessment, assurance, and policy development, and the 10 essential services as detailed in the National Public Health Performance Standards Program.13 In 2005, legislation was passed and state rules were adopted making local public health accreditation mandatory. The program has three functional components: an agency self-assessment; a 3-day on-site review by a multidisciplinary team of peer volunteers who review documentation and evidence, inspect facilities, and conduct interviews to evaluate compliance and verify that benchmarks and activities are met; and the determination of accreditation status by the independent NCLHDA Board.
The Washington State Standards for Public Health evaluate LHDs and state-agency programs in the areas of community health assessment and promotion, communicable disease and environmental health protection, and assurance of health services and access. The standards are the same for all agencies, although measures are tailored to meet specific state and local roles. This voluntary program includes self-assessment and a site visit of up to 2 days, depending on the size of the site and the number of measures being assessed.
On-Site Review: Elements and Activities
The attributes depicted in Table 1 provide a snapshot of on-site review elements and activities by state. The following description amplifies and augments this information.
Michigan's extensive on-site review process allows participants to exchange key information, share best practices, ask questions, and seek clarification. Michigan conducts site visits every 3 years, sending 15 reviewers per weeklong visit. The state is in its third cycle, having conducted more than 100 site visits of its 45 LHDs. To strengthen its process, Michigan is developing a model that incorporates the NACCHO Operational Definition of a functional LHD into the organizational capacity review.12 The model will increase accountability of agencies' capacity to deliver the core public health functions and the essential public health services.14,15 To demonstrate accountability, agencies will have to meet measurable standards and demonstrate improved performance. Recognizing LHDs' strong interest in high performance and continuous quality improvement,2,11 the Shewhart Cycle of Learning and Improvement—Plan, Do, Check, Act—will be used as an integral part of the model.16,17
Missouri's voluntary system has three accreditation levels: primary, advanced, and comprehensive. On the basis of its capacity, the LHD selects one of these levels when it begins the self-assessment process, which may last up to a year. Each agency does a process and impact evaluation, and the reviewers complete a process review with each visit. Site review teams are made up of three reviewers per team (there are 16 in all); reviewers typically hold advanced degrees and have held positions in local public health agencies. They receive an honorarium for their reviews. An independent evaluator compiles the results and submits a report to the Accreditation Council, which then revises processes as needed. Missouri uses an evaluation process to maintain the integrity of the site review, the reviewers, and the participating agencies.
The Missouri Institute for Community Health constantly refines its processes to eliminate duplication in its standards and the review process and to be more customer-centered. The first 3-year cycle was completed in summer 2006; standards were revised and streamlined for the second cycle.6
North Carolina's LHDs include rural and urban areas, large and small health departments, public health authorities, a community health alliance, and district health departments that represent multiple counties. North Carolina's mandatory standards-based Local Health Department Accreditation program focuses on a set of minimal standards that must be provided to ensure the protection of the health of the public, but does not limit the services or activities an LHD may provide to address specific local needs. While the benchmarks being applied are similar to the NACCHO Operational Definition of a functional LHD12 and are drawn from work done in other states, the activities are specific to practices in NCLHDs. The NCLHDA does not create a new accountability system; rather, it links basic standards to current state statutes and administrative code and the many Division of Public Health and Division of Environmental Health contractual and program monitoring requirements that are already in place.
The site review teams comprise no fewer than four individuals, selected by the independent accreditation administrator from a pool of trained peer volunteers. The individuals receive customized, comprehensive training and participate in conference calls and team meetings before the beginning of each site visit. They receive an honorarium for participating in the program.7
Since 2001, Washington State has conducted three cycles of assessment. Washington uses 23 standards to assess the performance of the public health system. These standards have been cross-walked with the Core Functions of Public Health, the Ten Essential Services, and the NACCHO Operational Definition, and are used for quality improvement and to augment program measures and outputs. This approach is predicated on the belief that standards and program measures are essential tools for improving outcomes.18
On-site reviews for each LHD and most areas of the state health department are carried out by independent contract consultants and staff peer reviewers. Training is conducted with the staff peer reviewers and all reviewers participate in interrater reliability testing, to assure that scoring is consistent. Before the site visit, the LHD completes a self-assessment. Site-visit activities include the review of written documentation and entrance and exit interviews. Each LHD receives a written report detailing its performance for each measure and recommendations for improvement.
The Washington program makes a special effort to make its site visits useful for gathering and sharing best practices systemwide. All on-site reviews are conducted during a 4- to 5-month period, allowing for a systematic assessment and snapshot of statewide performance. The system report identifies the specific results for each measure, comparisons between the state health department and LHDs, comparisons with previous years' results, recommendations about systemwide improvements, and improvements to the standards, measures, and processes. Documents that exemplify the work expected for each measure are collected and placed on a Web site, and agencies are encouraged to use these exemplars as they work to improve their individual results.8
In each of the MLC states that uses on-site review, it is a central feature of the accreditation or performance assessment process. The processes differ: Michigan, Missouri, and North Carolina evaluate LHDs only and do so on a rotating schedule across multiyear cycles; Washington evaluates LHDs, selected state programs, and the state Board of Health, and conducts all reviews during a 5-month period every 3 years to capture system performance. The length of time on site varies from one half-day to 5 days, and team size from 2 to 15 reviewers. All of these states emphasize assessing administrative capacity and adequacy of the public health infrastructure.
Most of these states use self-assessment to help the agency prepare for the on-site review. Anecdotal reports and survey data suggest that the amount of time spent on the self-assessment, and the seriousness with which it is conducted, may positively affect the accreditation or performance assessment outcome, both in how smoothly the review proceeds and in how much corrective action is needed to achieve conformance. In a Michigan survey of agencies, 86 percent of respondents agreed that the self-assessment is useful for identifying areas needing improvement.11 In the same survey, 84 percent of respondents reported that the self-assessment is useful for preparing for the on-site review, and 76 percent reported that the self-assessment is a catalyst for prereview consultation.11 Michigan has used these data to validate the usefulness of self-assessment as a tool for preparing for the on-site review and identifying areas needing improvement.
As the self-assessment determines the LHD's readiness, the on-site review serves as the springboard for quality improvement and provides the venue to evaluate conformance with standards and measures. The on-site review is used to conduct interviews and examine agency documents and policies. It also provides a forum for discussion about conformance and best practices.
No state holds site visits more often than every 3 years. All states evaluate the on-site review process and are concerned with the challenge of reviewer consistency in evaluating conformance with standards. To address this challenge, increase consistency, and improve the review process, all of these states provide ongoing training for reviewers and the agency undergoing review. Missouri uses a manager to ensure site-visit consistency. Washington uses an established interrater reliability process with mock evaluations, and Michigan and North Carolina use various interrater reliability processes, such as reviewer observation or job shadowing, depending upon the need.
Coordinating the on-site review process, including scheduling and making logistical arrangements, is labor intensive. In most states, a third party coordinates the review process and handles related activity. This third party is typically a public health institute or nongovernmental entity, such as a private contractor. Reasons for this vary, but include aspects of neutrality and technical expertise.
It appears that both the voluntary and mandatory approaches have the potential for full participation, which may have positive implications for a national voluntary system of accreditation for local and state public health entities. Michigan and North Carolina require participation in accreditation or performance assessment and the associated on-site review process; in Missouri and Washington, it is voluntary. Both Michigan and Washington have achieved 100 percent participation in their programs, although one is mandatory and the other voluntary. Missouri and North Carolina expect to achieve full participation. Of these states, none charges a fee specifically for the on-site review, although Missouri does charge an application fee for its voluntary accreditation program.
An important consideration for any accreditation or performance assessment program is reviewer knowledge, experience and skills, and organizational affiliation. These reviewers are selected by various methods, and their organizational affiliations and educational backgrounds vary. Michigan uses only state department reviewers; Missouri specifically excludes state employees from participation. Washington uses private contractors, and North Carolina uses trained peer volunteers selected by an independent accreditation administrator. All of the states have utilized peer reviewers at some point in their processes. Reviewers typically hold advanced degrees, have extensive state or local public health experience, and are specialists in their field.
Other important components of the on-site review process in all of these states are the exit interview and the on-site review findings reports. Exit interviews are mandatory and serve as forums where both strengths and weaknesses are discussed. Written reports of the on-site review findings are provided to all reviewed agencies. If standards are not met, three states require the reviewed agency to submit a corrective plan of action and be re-reviewed for conformance.
All states evaluate their on-site review process. Their ongoing evaluation activities continue to demonstrate the critical importance of the face-to-face exchange between reviewers and participants. Michigan's survey data suggest that success of the face-to-face exchange (as viewed by LHDs) depends largely upon reviewer skills, knowledge, attitude, beliefs, and behavior.11 The survey results found marked differences between reviewer and LHD perception with regard to how well the face-to-face exchanges were occurring. For example, when asked, “Do program reviewers have a good understanding of accreditation standards?” 50 percent of LHD respondents indicated “no” while 100 percent of reviewer respondents indicated “yes.”11 When asked, “Do reviewers apply accreditation standards the same way?” 80 percent of LHD respondents indicated “no” while 70 percent of reviewer respondents indicated “yes.”11 In addition, when asked, “Do program reviewers apply accreditation standards the same way at each LHD?” 40 percent of LHD respondents indicated “no” while 90 percent of reviewer respondents indicated “yes.”11 In spite of these differences, when asked the question “Is the on-site review an opportunity for constructive program-related dialogue?” 70 percent of LHD respondents indicated “yes” and 90 percent of reviewer respondents also indicated “yes.”11 The data suggest that differing perceptions exist depending on whether one is a reviewer or an LHD and that successful on-site review processes hinge on positive, productive, and appropriate interaction between reviewers and state/local participants.
All states in the MLC conduct accreditation/performance assessments to measure LHD organizational capacity and performance. In addition, these states also are conducting or exploring the use of accreditation or performance assessments to measure state health agency performance and capacity. Specifically, Washington has been conducting a state performance capacity assessment since 2002; Missouri completed an analysis of the state system in 2005, using the National Public Health Performance Standards and is exploring next steps; North Carolina initiated a state assessment in 2006; and Michigan began to explore development of a state process in the fall of 2006. Each state uses or plans to incorporate an on-site review component with respect to measuring state agency performance.
Illinois, although an MLC state, was not included in this study because it does not conduct on-site reviews for LHD certification. In the past, Illinois Department of Public Health Regional Health Officers conducted on-site reviews with certified LHDs for the purpose of performing midterm reviews of LHD compliance with Illinois certification standards. Illinois is exploring the future function and structure of on-site reviews. Future research into this topic should take into account the Illinois experience as it unfolds.
The examination of accreditation/performance assessment programs in Michigan, Missouri, North Carolina, and Washington has implications for other states and an evolving national accreditation model. When constructing the framework for state/local and/or national accreditation or performance assessment initiatives, several key concepts should be considered.
The MLC states in this study have found including on-site review and self-assessment components vital to their ability to assess accountability and performance of LHDs. According to their experiences, on-site review provides the venue to assess, observe, interview, review, evaluate, and/or survey a local/state agency or program regarding its ability to meet a set of public health standards. This process opens up a dialogue that allows LHDs to amplify, clarify, and verify their activities. It is expected that over time this dialogue will build trust between LHDs and their reviewers. That potential hinges on positive, productive, and appropriate interaction between public health professionals. Therefore, to maximize buy-in to an assessment or accreditation program, state reviewing entities should address reviewer recruitment, selection, skill set development, training, and interrater reliability, and ensure that LHD perceptions of reviewers are in line with reviewers' own perceptions of their roles and abilities.
Architects for new accreditation or assessment initiatives should consider establishing accreditation on-site reviews at both the state and the local levels to promote the support and collaboration between state and local public health in each state. Advancing public health for all citizens requires a partnership—a systems approach, which requires assessment of both state and local performance focusing on improved public health outcomes.
Other essential activities related to the on-site review process should include a focus on continuous quality improvement of the review process itself, including the reviewers' and the LHD's experiences during the review. Accreditation and performance assessment are living processes. All MLC states have learned that accreditation/performance assessment programs—including the site visit component of such programs—must evolve and continuously improve pursuant to evaluation results, new information, and changes in public health practice or priorities.
On the basis of their experiences, the states in this study found that standards used for on-site reviews are best based on the three core functions of public health (assessment, policy development, and assurance), the Ten Essential Services,15 and, if appropriate, the Operational Definition of a Functional Local Health Department.4 The MLC states used these resources to varying degrees, in alignment with national efforts to measure public health performance.
A primary purpose of on-site review, and an added value in several of the states, is that the entities responsible for accreditation or assessment were able to identify, collect, and disseminate effective or model practices. Sharing effective practices can aid in efforts to improve the health status of communities and service areas and improve processes within state and/or LHDs. In some cases, as in North Carolina's, using the services of peer reviewers served a dual function: providing a competent, experienced review process, and allowing the reviewers themselves to learn from reviewing other LHDs and to bring lessons back to their home agencies.
An objective accreditation or assessment process to evaluate state and local public health performance is critical to improving the protection of the public's health, assuring governmental sector accountability, and building and maintaining public health capacity across the nation. According to the states forging the way on this issue, using on-site reviews is a valuable component of this process and should be considered by all states planning to develop accreditation programs.
© 2007 Lippincott Williams & Wilkins, Inc.