The possibility of mitigating healthcare-associated infections (HAIs) has been recognized at least since the Crimean War, when in 1855 Nightingale1 reported that poor sanitation and the rapid spread of infection from patient to patient caused the deaths of large numbers of soldiers, especially where washing facilities were distant from patient care. Nightingale’s observations and attempts to prevent deaths by containing infection are among the earliest recorded instances of observational data stimulating improvements in the structure, processes, and outcomes of care.
Medical care in the 20th and 21st centuries is far more complex than it was in the 19th century, as are the causes of HAIs. In particular, 4 broad factors have been found to account for the observed growth in HAI rates2: (1) greater use of antibiotics, which fosters antimicrobial resistance; (2) increased technological complexity and invasive medical procedures, which compromise natural bodily defenses; (3) large numbers of patients who previously would have died from organ failure or diseases such as cancer, but who are instead living with compromised defenses; and (4) more widespread use of devices that penetrate the skin and disrupt natural barriers, thereby predisposing patients to infection in hospitals, ambulatory and home settings, and nursing homes.
Although conventional wisdom had been that HAIs were an expected complication of illness, a paradigm shift occurred as evidence accumulated showing that many (20%–70%) hospital-based and other HAIs are preventable.3,4 Key interventions have focused on limiting the initial and ongoing use of devices known to be associated with infections (eg, central line catheters, urinary catheters), and encouraging the use of techniques known to help prevent infections (eg, hand washing, grouping together or “bundling” evidence-based medical care practices rather than performing each action individually).5,6
By 2008, the growth of HAIs (and their associated costs) stimulated sufficient interest that a Government Accountability Office (GAO) report called for improved coordination of HAI prevention efforts.7 In response, the US Congress created an HHS Steering Committee for the Prevention of Healthcare-associated Infections, which consisted of a broad group of clinicians, scientists, and public health leaders who convened to coordinate and maximize the efficiency of HAI prevention efforts in the federal government.8 In 2009, HHS released the HHS HAI Action Plan, which was intended to serve as a roadmap and enhance collaboration and coordination among federal agencies to strengthen the impact of national efforts to address HAIs.9
The problem of HAIs reflects a multitude of complex issues: HAIs comprise many different diseases having multiple causes and occurring in diverse clinical practice settings. The costs of HAIs are borne not only by individual patients and their families, but also by a host of federal, state, regional, and local stakeholders who are involved in providing, paying for, and advocating for care. Solutions require a range of expertise in diverse areas such as basic science, epidemiology, information science, and patient safety, among others; and, to be fully effective, these solutions must be applied in a variety of settings ranging from hospitals to individual homes. Effective solutions also have to coordinate the efforts of multiple agencies, prioritize hundreds of recommendations, and integrate ambitious, costly, but not yet fully developed, data and monitoring systems.
To optimize understanding of the Action Plan’s effectiveness at eliminating HAIs and to identify challenges to success, HHS requested an independent, longitudinal, and formative program evaluation, which followed the path of an earlier successful national evaluation of the patient safety movement.10 An independent evaluation ensures objectivity as a means to improve decision making and policy; a longitudinal evaluation supports the need for data collection and analysis over time; and a formative approach provides ongoing feedback about whether the program is achieving desired outcomes and selecting effective strategies to achieve those outcomes.11–14
In 2009, under a multi-HHS agency agreement, IMPAQ International and the RAND Corporation contracted with HHS to serve as the evaluation team for HHS’s National Action Plan to Prevent Healthcare-associated Infections: Roadmap to Elimination.9,15 This 4-year evaluation, one of the most comprehensive ever commissioned by a federal agency, was designed to record the context, scope, design, decisions, strategies, and outcomes of the Action Plan and to provide HHS with ongoing feedback to inform the Action Plan’s future directions.
The purpose of this article is to highlight key elements of the approach selected for this large program evaluation, the reasons for those choices, and key lessons learned about evaluating, in real time, a large federal program that combines clinical and policy components. The core findings of the evaluation and associated analyses will be presented in subsequent articles in this issue.
We first introduce the Context-Input-Process-Product (CIPP) framework,16,17 which provides a flexible means to structure the story of the HAI Action Plan’s development, implementation, and impacts. Next, we describe an HAI prevention system framework,18 which we used in conjunction with the CIPP model, to examine the primary functions and properties of the healthcare system that, together, work to prevent and mitigate HAIs. The properties characterize the dynamics of the people and policies that impact the healthcare system and can motivate change in the system functions. We then provide a brief overview of the ways in which this framework was applied in the evaluation of the Action Plan.
The CIPP model16 used in the evaluation provides a structured yet flexible design for program evaluation, and was previously used to evaluate AHRQ’s patient safety portfolio.10 The CIPP model offers a means to evaluate the Action Plan’s many moving parts: it includes a structure for tracking the components of the program, the relationships between components, and how they change over time.
There are 4 core components of CIPP as used in our research16,18: (1) the Context in which HAIs and efforts to mitigate them developed; (2) the Inputs and decisions made about how to leverage resources, infrastructures, and relationships to select the set of activities for implementation; (3) Processes of implementation for selected activities; and (4) Products and outcomes. The model uses a systematic path to delineate useful information for judging ongoing relationships between activities and decisions. The model posits an ongoing cycle in which activities influence decisions affecting the selection and implementation of subsequent activities, which in turn influence future decisions and activities. The evaluation team examined each component of the Action Plan through the lenses of the CIPP model and focused the evaluation on both the aggregate effect of the Action Plan components and the ways in which each CIPP component influences the next.
The upper cell of Figure 1 depicts 3 characteristics of context that were examined in the evaluation: needs, resources, and challenges. The policy environment for the HAI Action Plan includes ongoing challenges in HAI prevention as well as congressional and legislative intent.8 The context evaluation examines the circumstances that stimulated the creation of the Action Plan to assess whether goals set by the HHS Steering Committee and various working groups were responsive to the context.
The goals establish a foundation for the selection of inputs for the Action Plan (second cell of Fig. 1). Inputs include the strategies, assets (including projects, activities, programs, and the allocation of resources), and expertise (including leadership) needed to support the goals. The input evaluation also considers the processes used to select inputs and the alternatives considered.
Although the input evaluation focuses on the selection of inputs for the Action Plan, the process evaluation considers the implementation of selected HAI projects, programs, and other inputs (third cell of Fig. 1). It focuses on progress made in HAI prevention, challenges associated with implementation strategies and models, and unintended consequences of this process.
The process evaluation leads to the lower cell of Figure 1, which focuses on the products or outcomes that result from implementation. The ultimate outcomes of the Action Plan and its efforts to prevent HAIs are the reduction and elimination of HAIs and their associated burden on individuals and society. Other important outcomes include the increased capacity of the healthcare system, in general and at different levels, to continually drive and sustain such improvements.
The HAI Prevention System Framework
The CIPP model consists of 4 components (Context, Inputs, Process, Product) that are useful for understanding the ongoing cycle of activities that create, support, and sustain the Action Plan. However, because these components are quite broad, we also developed a system framework specific to HAI prevention to provide a more-focused categorization within the CIPP model and to facilitate understanding of the data collected for the evaluation. The system framework has 2 main parts: system functions and system properties, which are presented in Table 1.
Early in the evaluation, we identified 4 primary functions that, acting together, enable the healthcare system to prevent and mitigate HAIs. These functions are substantively similar to the “pillars of HAI elimination” identified by subsequent authors.19 These elements represent the spectrum of functions to be addressed by the Action Plan. The 4 system functions are:
* Infrastructure Development—governance and structures at various levels of the healthcare system that support adoption of HAI prevention practices.
* HAI Data and Monitoring—the development, collection, validation, and use of HAI data for outcomes monitoring and systems improvement.
* Knowledge Development—the range of HAI-related research, from basic science and epidemiology to development of prevention practices and implementation science.
* Adoption of HAI Prevention Practices—the implementation, use, and sustainability of HAI prevention practices in healthcare and community settings.
The functions are relevant to all components of the CIPP model. For example, all 4 functions should be addressed in the goals set by the Action Plan (Context evaluation), the inputs selected to support the Plan (Input evaluation), and the Process and Product evaluations.
Although it is important to understand what the Action Plan achieved in terms of addressing the system functions, it is also important to understand how the Plan was developed and implemented and, in particular, whether the Plan sought to ensure that the parts and stakeholders of the system work in concert to drive and support effective changes for HAI improvement down to the point of care for individual patients. On the basis of the results of the context and input evaluations during the first year of the evaluation,18 we identified 5 properties or dynamics that can impact the healthcare system and affect the Action Plan’s ability to foster change and improvement in the system functions:
* Prioritization—setting priorities in various functional and other areas of HAI prevention to guide efforts (eg, selection of types of infections, healthcare settings, and targets and metrics to address).
* Coordination and Alignment—coordinating HAI prevention activities across stakeholders and levels of the healthcare system to leverage efforts and reduce redundancies and inconsistencies.
* Accountability and Incentives—identifying clear responsibility for different tasks in HAI prevention (at different levels of the healthcare system), mechanisms for monitoring and feedback, and corrective action for goals not met.
* Stakeholder Engagement—communicating and developing trust among stakeholders to solicit support and action for HAI prevention.
* Resources—identifying funding and other material support for HAI prevention at any level of the healthcare system, including effective targeting, timely use, and duration and predictability of resources over time.
The system properties reflect dimensions frequently discussed in the literature on systemic change, innovation, and reform.20–23
The key questions that guided the analysis of the system properties are shown in Table 2. As with the system functions, the system properties apply to all components of the CIPP model. For example, the Process evaluation considered whether programs implemented under the Action Plan reflected progress in setting priorities, and were carried out with sufficient stakeholder engagement.
Multiple, Multilevel Sources of Data
In implementing the CIPP model, the evaluation team analyzed objective data from a variety of sources (Table 3), including documents, literature, and an inventory of federal agency programs and projects, as well as subjective inputs such as the results of stakeholder interviews. We developed a set of criteria for inclusion and exclusion, and then reviewed, cataloged, and coded the relevant items identified; a supplemental Online Methods Appendix (Supplemental Digital Content, http://links.lww.com/MLR/A592) provides additional detail about these sources and the review and coding processes. In aggregate, these data sources were designed to provide insights into the various levels of governmental jurisdiction affected by HAIs and potentially affected by the HAI Action Plan, including HAI reduction efforts attempted or conducted by federal agencies, regions, states, and local entities.
In addition, throughout the evaluation period of 2009 through 2013, the evaluation team maintained communication with HHS leaders and other stakeholders both to convey information about the evaluation’s methods and preliminary findings and to learn about leaders’ and stakeholders’ evolving priorities and challenges.
The team held biweekly meetings with HHS, including the HHS Team Leader for Healthcare-associated Infection Prevention and a management analyst, to discuss the progress of the evaluation as well as challenges and options for solving those challenges. The evaluation team also met monthly with the Action Plan’s Federal Agency Working Group, which included representatives from AHRQ, the Centers for Disease Control and Prevention, the Centers for Medicare & Medicaid Services (CMS), and the Office of the Assistant Secretary for Health. The evaluation team also met at least annually with the Deputy Assistant Secretary for Healthcare Quality, Office of Healthcare Quality, within the HHS Office of the Secretary, Office of Public Health and Science.
In this section, we highlight some of the key ways in which we applied the CIPP model and system framework to the HAI Action Plan. This discussion focuses on the evaluation approach, whereas other articles in this special issue will discuss the findings from the evaluation.24–31
A CIPP evaluation begins with an exploration of the extent to which the goals of the program address the context in which it was introduced. For Action Plan goals to be responsive to context, they needed to account for multiple historical antecedents, including the emerging patient safety and consumer movements32 with their emphases, respectively, on the need for new structures for medical care delivery33 and advances in data and monitoring resources to support transparency,34 as well as the advancing science of HAIs with its emphasis on building the scientific evidence base and developing evidence-based guidelines.35–37 Of primary importance in assessing the Action Plan goals were the 2 major issues identified in the 2008 GAO report: the need for prioritization to address the multitude of clinical practices for preventing HAIs and related adverse events; and the need for greater consistency and compatibility of the HAI data collected by HHS to increase information available about HAIs, including reliable national estimates of the major types of HAIs. The evaluation looked for evidence that Action Plan goals addressed these and related contextual factors, and for evidence that established goals addressed the range of system functions and properties.
At the time the Action Plan was released in 2009, there were a large number of potential “inputs” that could be used to model or inform the selection of Action Plan infrastructures or activities. These included the infrastructure of the lead Action Plan agencies, an existing research base, scores of recommended HAI prevention practices, and multiple monitoring systems for tracking HAI rates. Action Plan leaders had to decide which of these inputs would be useful in developing a more effective plan for HAI reduction.
The evaluation team looked for evidence that inputs selected for the Action Plan addressed the goals set forth in the plan, and, by extension, the contextual factors that led to the Action Plan. The evaluation also considered whether inputs were selected with an eye toward implementation; that is, whether sufficient resources and structures would be available to implement selected programs and activities. With coordination of HAI services through HHS being a mandate of the plan, the evaluation looked for evidence that the Action Plan leveraged opportunities for prioritization, alignment, accountability, stakeholder engagement, and resources in its decisions about which activities should move toward implementation.
The process evaluation considered the ways in which selected HAI projects and programs were implemented and the general processes and strategies used by federal agencies during implementation. The evaluation also sought to identify which HAI resources helped or hampered implementation and which factors led to success with, or barriers to, implementation. The evaluation also asked how the Action Plan could have done a better job supporting implementation of activities—especially so that the activities implemented might have been more responsive to goals. The process evaluation gave particular attention to system properties and dynamics—prioritization of efforts, coordination and alignment, stakeholder engagement, accountability and incentives, and use of resources—in stimulating change in HAI prevention functions across levels of the healthcare system down to the point of patient care.
Product evaluation focuses on the data that document the effect of the Action Plan on HAI prevention capacity and sustainability (eg, adoption of practices by key stakeholders involved in HAI prevention and elimination, such as federal and state government agencies; private trade, professional, and interest organizations; healthcare organizations and providers; and patients) as well as effects on HAI incidence and rates. These effects, and the knowledge gleaned from analyzing them (eg, guidelines and best practices for preventing and eliminating HAIs, identification of gaps in HAI science requiring policy and research attention, and dissemination of insights to various channels and audiences) are expected to feed back into policy and decision making in the other 3 components of the CIPP model to further guide and effect change in HAI improvement.
The results of the product evaluation are described elsewhere in this issue and address what has happened as a result of the Action Plan.29–31 The evaluation also asks whether desired goals might have been achieved more effectively and whether any unintended adverse consequences could have been avoided.
The problem of HAIs is complex, and an effective solution will have to be similarly complex and well orchestrated, requiring the involvement of basic scientists and epidemiologists, information scientists, patient safety experts, consumer advocates, clinicians and staff associated with diverse clinical practice settings, and a host of federal, regional, state, and local stakeholders. The Action Plan began its mission of eliminating HAIs by acknowledging the complexity of the task. The Action Plan continues today as a work-in-progress, responding to challenges as they arise.
Strengths of the Evaluation Approach
The CIPP model has several features that make it advantageous for an evaluation such as this. First, it allowed the evaluation to account for the full scope and complexity of the Action Plan. Although randomized controlled designs are desirable for many types of evaluations, particularly studies to assess the effects of single or focused groups of interventions on specified outcomes, such a design is not appropriate for the overall evaluation of a complex and evolving program that incorporates multiple conceptual and practical components. Evaluation of a large program such as the Action Plan is better served by a broader evaluation approach such as CIPP16 that can provide a high-level view of progress and challenges associated with the plan while also allowing for numerous focused assessments of specific program activities. These focused assessments can use a variety of sampling strategies, data collection methods, and analytic approaches, depending on the nature of the activity being assessed. Second, the model is longitudinal; that is, it allowed for ongoing, iterative data collection and assessment over time. The evaluation sought to systematically collect quantitative and qualitative data from multiple sources to capture the context, inputs, processes, and products of the Action Plan’s work and to link these components to tell the story of what the Action Plan has successfully accomplished and what might have been done better. Consistent with CMS’s recent focus on rapid cycle evaluation, this evaluation was conducted concurrent with program implementation.
Third, CIPP supported the development of data and analyses that are useful for formative evaluation, which provided program leaders with feedback to inform ongoing decisions. For example, the evaluation team informed Action Plan leadership about the need for additional data to support a comparative analysis of longitudinal HAI rates, which facilitated additional data documentation and sharing for the analyses. As noted by Farley,10 program evaluation emphasizes both what and how activities happen and the contextual factors that influence what occurred. A formative evaluation in which feedback is iteratively shared with program leaders is particularly useful to an agency such as HHS that is responsible for addressing the perspectives of multiple stakeholders and for incorporating these perspectives into regular status reports and funding requests to GAO and Congress.39
The system framework provided an additional structure for categorizing and understanding areas of interest specific to HAIs, such as trends in HAI data and monitoring and adoption of HAI prevention practices. The 5 system properties (prioritization, coordination, accountability, stakeholder engagement, and resources) allowed the evaluation team to understand not only what the Action Plan was doing but whether it facilitated change in the healthcare system to address the problem of HAIs, and, if so, how it facilitated change and whether changes were consistent with the goals of the Action Plan.
The evaluation appropriately involved substantial stakeholder engagement, ranging from longitudinal interviews with stakeholders internal and external to HHS and interviews with regional, state, and local experts to regular meetings with HHS agency leaders and participation of evaluation team members in HHS activities and events. The use of a broad set of data collection tools allowed the evaluation team to gather information from and relevant to a wide range of stakeholders, including consumers, patients and their caregivers, healthcare workers, managers, and policymakers spanning local, state, regional, and federal levels.
Although the CIPP model and the associated system framework are useful for large program evaluations, substantial resources must be available for capturing large amounts of data and changes in the data across time, space, and domains. The model should not be applied without consideration of the resources required to account for many changing components and the ways in which they interact to facilitate change. Research teams using this approach should consider developing a tailored framework such as the system framework used in this study to help categorize and interpret the data.
Although the evaluation team had unrestricted access to many experts associated with HHS, access to certain data elements was restricted or delayed until HHS completed internal plans and messaging strategies. On the whole, however, the evaluation team was granted adequate access so that we were able to implement planned evaluation activities.
Although the approach provides a very finely grained way to look at the program being evaluated and to describe its many components, the analytical portion of the model can be challenging because there is not 1 “right way” to assess whether a particular goal, input, process, etc. is “good” or “bad.” Thus, it is important for teams using the model to ensure that they identify clear and sufficiently detailed assessment approaches to use with CIPP in advance of analysis, including quantitative outcome measures (such as HAI rates in this evaluation), if available, to supplement qualitative analyses.
Finally, although the CIPP program evaluation model provides an integrative view of how change occurs even when it involves the integration of multiple moving parts, it is not designed to evaluate the cost-effectiveness, efficiency, success, or shortcomings of an individual intervention. Nonetheless, the use of a broad evaluation method such as CIPP does not preclude additional, focused analyses of specific interventions using another methodology.
The articles that follow in this volume illustrate how the CIPP model, together with the system framework, can be used to tell the story of the HAI Action Plan and its progress in meeting implementation goals and outcomes, including reduction in national HAI rates and sustainable system capacity. Understanding what has and has not worked is important both to facilitate future progress toward the goal of eliminating HAIs and to understand the role of program evaluation in supporting that progress. With coordination and alignment becoming increasingly important across large programs within healthcare, education, technology, transportation, and the military, examining and deriving lessons from program evaluations such as this will inform the policy community about what works and why, and how future complex, large-scale evaluations should be conducted.
1. Nightingale F. Florence Nightingale: Measuring Hospital Care Outcomes: Excerpts From the Books Notes on Matters Affecting the Health, Efficiency, and Hospital Administration of the British Army Founded Chiefly on the Experience of the Late War, and Notes on Hospitals. Oakbrook Terrace, IL: Joint Commission on Accreditation of Healthcare Organizations; 1999:41–45.
2. Weinstein R. Nosocomial infection update. Emerg Infect Dis. 1998;4:416–420.
3. Scott RD. The Direct Medical Costs of Healthcare-associated Infections in US Hospitals and the Benefits of Prevention. Atlanta, GA: National Center for Preparedness, Detection and Control of Infectious Diseases (US), Division of Healthcare Quality Promotion; 2009. Available at: http://stacks.cdc.gov/view/cdc/11550/. Accessed May 14, 2013.
4. Pronovost P, Needham D, Berenholtz S, et al. An intervention to decrease catheter-related bloodstream infections in the ICU. N Engl J Med. 2006;355:2725–2732.
5. Medicare Patient Safety Monitoring System (MPSMS) Final Report. Rocky Hill, CT: Qualidigm; 2009.
6. Ranji SR, Shetty K, Posley KA, et al. Prevention of healthcare-associated infections. In: Shojania KG, McDonald KM, Wachter RM, Owens DK, eds. Closing the Quality Gap: A Critical Analysis of Quality Improvement Strategies. Technical Review 9, Vol. 6 (prepared by the Stanford University-UCSF Evidence-based Practice Center under Contract No. 290-02-0017). AHRQ Publication No. 04(07)-0051-6. Rockville, MD: Agency for Healthcare Research and Quality; 2007.
7. US Government Accountability Office. Leadership needed from HHS to prioritize prevention practices and improve data on these infections. Washington, DC: GAO; 2008.
8. Healthcare Associated Infections: A Preventable Epidemic. Washington, DC: House Committee on Oversight and Government Reform; 2008.
9. Action Plan to Prevent Healthcare-associated Infections. Washington, DC: US Department of Health and Human Services; 2009.
10. Farley DO, Battles JB. Evaluation of the AHRQ patient safety initiative: framework and approach. Health Serv Res. 2009;44(pt 2):628–645.
11. Crawford MA, Woodby LL, Russell TV, et al. Using formative evaluation to improve a smoking cessation intervention for pregnant women. Health Commun. 2005;17:265–281.
12. Kochevar LK, Yano EM. Understanding health care organization needs and context: beyond performance gaps. J Gen Intern Med. 2006;21(suppl 2):S25–S29.
13. Stetler CB, Legro MW, Wallace CM, et al. The role of formative evaluation in implementation research and the QUERI experience. J Gen Intern Med. 2006;21(suppl 2):S1–S8.
14. Westbrook JI, Braithwaite J, Georgiou A, et al. Multimethod evaluation of information and communication technologies in health in the context of wicked problems and sociotechnical theory. J Am Med Inform Assoc. 2007;14:746–755.
16. Stufflebeam DL. The CIPP model for evaluation. In: Stufflebeam DL, Madaus GF, Kellaghan T, eds. Evaluation Models: Viewpoints on Educational and Human Services Evaluation. Boston, MA: Kluwer Academic Publishers; 2000:279–317.
17. Stufflebeam DL. Educational Evaluation and Decision Making. New York, NY: Peacock Press; 1971.
18. Mendel P, Weissbein D, Weinberg D, et al. Longitudinal Program Evaluation of the HHS Action Plan to Prevent Healthcare-associated Infections: Year 1 Report. Santa Monica, CA: RAND; 2011.
19. Cardo D, Dennehy PH, Halverson P, et al. Moving toward elimination of healthcare-associated infections: a call to action. Infect Control Hosp Epidemiol. 2010;31:1101–1105.
20. Ferlie EB, Shortell SM. Improving the quality of health care in the United Kingdom and the United States: a framework for change. Milbank Q. 2001;79:281–315.
21. Greenhalgh T, Robert G, Macfarlane F, et al. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q. 2004;82:581–629.
22. Mendel PJ, Meredith LS, Schoenbaum M, et al. Interventions in organizational and community context: a framework for dissemination in health services research. Adm Policy Ment Health. 2008;35:21–37.
23. Damschroder L, Aron DC, Keith RE, et al. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implementation Sci. 2009;4:50.
24. Mendel P, Siegel S, Leuschner KJ, et al. The national response for preventing healthcare-associated infections: infrastructure development. Med Care. 2014;52(2 suppl 1):S17–S24.
25. Kahn KL, Weinberg DA, Leuschner KH, et al. The national response for preventing healthcare-associated infections: data and monitoring. Med Care. 2014;52(2 suppl 1):S25–S32.
26. Kahn KL, Mendel P, Leuschner KJ, et al. The national response for preventing healthcare-associated infections: research and adoption of prevention practices. Med Care. 2014;52(2 suppl 1):S33–S45.
27. Siegel S, Kahn KL. Regional interventions to eliminate healthcare-associated infections. Med Care. 2014;52(2 suppl 1):S46–S53.
28. Fischer L, Ellingson K, Jernigan J, et al. The role of the public health analyst in the delivery of technical assistance to state health departments for healthcare-associated infection prevention. Med Care. 2014;52(2 suppl 1):S54–S59.
29. Cataife G, Weinberg DA, Kahn KL. The effect of Surgical Care Improvement Project (SCIP) compliance on surgical site infections (SSI). Med Care. 2014;52(2 suppl 1):S66–S73.
30. Weinberg DA, Kahn KL. An examination of longitudinal CAUTI, SSI, and CDI rates from key HHS data systems. Med Care. 2014;52(2 suppl 1):S74–S82.
31. Mendel P, Weinberg DA, Gall EM, et al. The national response for preventing healthcare-associated infections: system capacity and sustainability for improvement. Med Care. 2014;52(2 suppl 1):S83–S90.
33. Kohn LT, Corrigan JM, Donaldson MS. To Err Is Human. Washington, DC: National Academy Press; 1999.
35. McDonald KM, Sundaram V, Bravata DM, et al. Care coordination. In: Shojania KG, McDonald KM, Wachter RM, Owens DK, eds. Closing the Quality Gap: A Critical Analysis of Quality Improvement Strategies. Technical Review 9, Vol. 7 (prepared by the Stanford University-UCSF Evidence-based Practice Center under Contract No. 290-02-0017). AHRQ Publication No. 04(07)-0051-7. Rockville, MD: Agency for Healthcare Research and Quality; June 2007.
36. Yokoe D, Classen D. Improving patient safety through infection control: a new healthcare imperative. Infect Control Hosp Epidemiol. 2008;29:S3–S11.
37. Yokoe DS, Mermel LA, Anderson DJ, et al. A compendium of strategies to prevent healthcare-associated infections in acute care hospitals. Infect Control Hosp Epidemiol. 2008;29:S12–S21.
38. Continuing leadership needed from HHS to prioritize prevention practices and improve data on these infections. March 18, 2009. Available at: http://www.gao.gov/new.items/d09516t.pdf. Accessed March 25, 2013.
39. Howell EM, Yemane A. An assessment of evaluation designs: case studies of 12 large federal evaluations. Am J Eval. 2006;27:219–236.