Academic Medicine: July 2014, Volume 89, Issue 7
doi: 10.1097/ACM.0000000000000285
Commentaries

Advancing the Use of Checklists for Evaluating Performance in Health Care

Rosen, Michael A. PhD; Pronovost, Peter J. MD, PhD


Author Information

Dr. Rosen is assistant professor, Armstrong Institute for Patient Safety and Quality, Johns Hopkins Medicine, and assistant professor, Department of Anesthesiology and Critical Care Medicine, Johns Hopkins University, Baltimore, Maryland.

Dr. Pronovost is senior vice president of patient safety and quality and director, Armstrong Institute for Patient Safety and Quality, Johns Hopkins Medicine, and professor, Department of Anesthesiology and Critical Care Medicine, Department of Surgery, and Department of Health Policy and Management, Johns Hopkins University, Baltimore, Maryland.

Funding/Support: None reported.

Other disclosures: Dr. Pronovost reports receiving grant or contract support from the Agency for Healthcare Research and Quality (AHRQ), the Gordon and Betty Moore Foundation (research related to patient safety and quality of care), the National Institutes of Health (acute lung injury research), and the American Medical Association (research related to improving blood pressure control); honoraria from various health care organizations for speaking on quality and patient safety (the Leigh Bureau manages these engagements); book royalties from the Penguin Group; stock and fees to serve as a director for Cantel Medical; and fees to be a strategic advisor to the Gordon and Betty Moore Foundation. Dr. Pronovost is a founder of Patient Doctor Technologies, a startup company that seeks to enhance the partnership between patients and clinicians with an application called Doctella. Dr. Rosen reports receiving funding from AHRQ, the Gordon and Betty Moore Foundation, VHA Inc.’s Center for Applied Healthcare Studies, and Gradian Health Systems, LLC (all research related to patient safety and quality of care).

Ethical approval: Reported as not applicable.

Correspondence should be addressed to Dr. Pronovost, 750 E. Pratt St., 15th Floor, Baltimore, MD 21202; telephone: (410) 502-6127; e-mail: ppronovo@jhmi.edu.


Abstract

Patients frequently do not receive recommended therapies because performance expectations are often unclear. Clinical guidelines provide exhaustive details and recommendations, but this information is not formatted in a way that supports decision making or bedside translation of therapies. When performance expectations are unclear, it is difficult for clinicians to assess their own or others’ competence. Checklists offer hope because they codify interventions, remove ambiguity, and increase the reliability of care processes. Schmutz and colleagues developed a robust methodology to create a checklist for evaluating clinical performance, which is described in this issue of Academic Medicine.

In this commentary, the authors offer several points to consider as checklists become more prevalent in medical education and clinical practice. First, culture is a much larger part of the equation than the checklist itself; understanding what all stakeholders expect to gain will help engage them in checklist use. Second, the construction, validation, and maintenance of checklist evaluation tools are labor intensive, requiring innovative dissemination approaches to ensure maximum access to and use of checklists. Third, because some aspects of clinical performance cannot be captured on a checklist, integrated systems are needed that evaluate both technically specified and adaptive performance. Fourth, checklists provide an opportunity to evaluate and improve an individual’s performance concurrently with the context in which care is delivered. A tighter connection between education and training activities and process improvement strategies will accelerate improvements in safety and quality. Schmutz and colleagues have provided advancements in performance evaluation that will help health care achieve higher-quality, safer care.

Patients frequently do not receive recommended therapies because performance expectations are often unclear. Current practices for summarizing evidence into clinical guidelines undoubtedly contribute to this problem. Clinical guidelines are usually scholarly, yet too often they lack actionable guidance for translating evidence into care at the bedside. Guidelines do not prioritize the often-exhaustive number of decisions and actions they describe, and they are usually ambiguous. Further, they do not recommend therapies when the evidence is incomplete, even though a clinician must still prescribe some therapy. The way guidelines are laid out (i.e., formatted) violates the principles of usability, offering long lists of steps with conditional probabilities that do not support decision making, particularly when clinicians are under time pressure and other stressors.1 Finally, unclear performance expectations, along with general self-assessment biases, likely underlie the discrepancies found between external observations of physician competence and physicians’ self-assessments of their competence.2

Checklists offer hope. Well-constructed checklists codify interventions, remove ambiguity, and increase the reliability of care processes. In educational settings, checklists can serve not only as evaluation tools but also as a common, easy means of communicating a set of expectations regarding effective performance. Checklists have translated evidence-based and other best practices to the bedside for a wide range of complications and care processes, from central-line-associated bloodstream infection3 to surgical care4 and ventilator-associated pneumonia.5 Evidence also shows that they have reduced mortality.6 However, the process for developing each of these checklists has varied significantly. Schmutz and colleagues7 have developed a robust methodology for creating checklists that evaluate clinical performance. Their contribution is important and greatly needed. Below, we offer some points to consider as checklists become more prevalent in medical education and clinical practice.

Our experience implementing checklists in clinical practice indicates that culture matters. The degree to which a checklist influences care processes and outcomes depends on the attitudes and behaviors of those using it.4,8 Checklists are used—and useful—only if staff believe they will truly change care and improve outcomes. To illustrate, simply mandating the use of a surgical checklist in 133 surgical hospitals in Ontario did not improve outcomes.9 The transformation of culture from “I can’t” to “I can” is the larger part of the equation; checklists are but a small fraction of it.

We should expect similar effects for checklists used in educational assessments. A person’s motivation to learn and the expected utility of the learning experience will influence learning outcomes. Understanding what educators and learners expect to gain from evaluation checklists will be key when designing strategies that engage stakeholders in checklist use. Ultimately, these tools should be introduced with appropriate engagement and culture change interventions to ensure buy-in.

The checklist development process that Schmutz and colleagues articulated is clear, reproducible, and robust. It is also labor intensive, particularly considering the potential number of components required to evaluate clinical performance for a given condition. Take, for example, the septic shock checklist they developed: it contains 33 evaluation items for this one clinical scenario. To yield maximum benefit, the health care community must decide who will develop these checklists and how they will be shared. Perhaps professional societies, accrediting organizations, or large health systems could build, maintain, and share them. Ideally, there would be one open-source repository, like the EQUATOR Network,10 which maintains an electronic library of guidelines and checklists for reporting different types of research. Decisions about the development, maintenance, and dissemination of checklists will greatly affect the value these tools have for medical education.

Recognizing that much of health care is and will likely remain unspecified, or at least underspecified, is also of great importance. This is true even with diseases like sepsis for which the protocols have higher-than-average levels of specificity. Care teams assemble in real time to communicate, make sense of data, generate hypotheses, make value-based decisions, and solve problems. Checklists are powerful tools for promoting and evaluating specified aspects of care or competence, but other methods (e.g., peer review and collaborative learning networks) are needed to address unknown or less technically prescriptive components of competence, such as introducing everyone in the operating room. Ultimately, a comprehensive multisource assessment system that integrates information about competence in all domains remains a generally unmet goal for health care.

Medical education supports the lifelong process of developing experts. Classically defined, expertise involves fitting, or adapting, an individual’s performance capacities to the nature of the tasks within a work domain.11 Medical education adapts an individual’s abilities and performance processes by developing in him or her a large, interconnected knowledge base and refining his or her psychomotor and procedural skills. However, the other side of the expertise equation—the nature of the task and the often-uncertain demands it places on professionals—is equally important. Redesigning tasks to eliminate needless complexity and ambiguity can shorten the learning curve for clinicians. For example, most improvement efforts focus on one type of harm, but patients are at risk for multiple harms. Each harm type needs a checklist; each checklist needs multiple items; and some of these items may need to be performed multiple times a day. Consider a patient in intensive care, who is at risk for over a dozen different harms and would need approximately 200 interventions every day to prevent them all. This would require an unwieldy checklist, relying on the heroism of clinicians to manage it, when it would be more reliable to design safer systems.12 Many of the items on the septic shock checklist could be automated if the electronic medical record were linked to other devices and if clinicians were supported with analytics. In this way, the effectiveness and efficiency of educational processes would be tied to the quality of the work system in place to manage, in this case, sepsis.

Checklist developers should examine the work they want to evaluate within the context in which it is actually delivered. Doing so would support evaluating an individual’s or team’s performance while accounting for potential confounders, such as inadequate supplies or faulty equipment. Consequently, the checklist development process represents an opportunity to concurrently examine educational and work practices alongside performance improvement goals.

Schmutz and colleagues should be congratulated for advancing the science of checklist development. As we physicians and physician educators work to meet the broader challenges of implementing rigorous and effective performance evaluation systems, we must remember that our goals for health care are to partner with patients and their loved ones to eliminate harm, to optimize outcomes and experience, and to reduce wasted resources and costs. To achieve these goals, clinicians must excel in both technical work and teamwork. They must be supported by leaders and a positive culture, and they must have reliable access to well-designed technologies, helpful tools and clear work processes, effective learning and development opportunities, and meaningful and timely feedback on their performance.

Acknowledgments: The authors thank Christine G. Holzmueller for editing the manuscript.


References

1. Klein G. Naturalistic decision making. Hum Factors. 2008;50:456–460

2. Davis DA, Mazmanian PE, Fordis M, Van Harrison R, Thorpe KE, Perrier L. Accuracy of physician self-assessment compared with observed measures of competence: A systematic review. JAMA. 2006;296:1094–1102

3. Pronovost PJ, Goeschel CA, Colantuoni E, et al. Sustaining reductions in catheter related bloodstream infections in Michigan intensive care units: Observational study. BMJ. 2010;340:c309

4. Haynes AB, Weiser TG, Berry WR, et al; Safe Surgery Saves Lives Study Group. Changes in safety attitude and relationship to decreased postoperative morbidity and mortality following implementation of a checklist-based surgical safety intervention. BMJ Qual Saf. 2011;20:102–107

5. Berenholtz SM, Pham JC, Thompson DA, et al. Collaborative cohort study of an intervention to reduce ventilator-associated pneumonia in the intensive care unit. Infect Control Hosp Epidemiol. 2011;32:305–314

6. Lipitz-Snyderman A, Steinwachs D, Needham DM, Colantuoni E, Morlock LL, Pronovost PJ. Impact of a statewide intensive care unit quality improvement initiative on hospital mortality and length of stay: Retrospective comparative analysis. BMJ. 2011;342:d219

7. Schmutz J, Eppich WJ, Hoffmann F, Heimberg E, Manser T. Five steps to develop checklists for evaluating clinical performance: An integrative approach. Acad Med. 2014;89:996–1005

8. Dixon-Woods M, Bosk CL, Aveling EL, Goeschel CA, Pronovost PJ. Explaining Michigan: Developing an ex post theory of a quality improvement program. Milbank Q. 2011;89:167–205

9. Urbach DR, Govindarajan A, Saskin R, Wilton AS, Baxter NN. Introduction of surgical safety checklists in Ontario, Canada. N Engl J Med. 2014;370:1029–1038

10. EQUATOR Network. Enhancing the QUAlity and Transparency Of health Research [homepage]. http://www.equator-network.org/home/. Accessed March 25, 2014

11. Ericsson KA, Lehmann AC. Expert and exceptional performance: Evidence of maximal adaptation to task constraints. Annu Rev Psychol. 1996;47:273–305

12. Pronovost PJ, Bo-Linn GW. Preventing patient harms through systems of care. JAMA. 2012;308:769–770

© 2014 by the Association of American Medical Colleges
