Mitchell, Erica L. MD; Lee, Dae Y. MD; Arora, Sonal MBBS, PhD; Kenney-Moore, Pat MS, PA-C; Liem, Timothy K. MD; Landry, Gregory J. MD; Moneta, Gregory L. MD; Sevdalis, Nick PhD
The issue of patient safety is receiving increasing attention worldwide, and numerous studies have shown that approximately 10% of patients admitted to a hospital suffer an adverse event.1 Over half of these adverse events are associated with a surgical procedure and, importantly, most are preventable.2–4 Surgeons and surgical trainees therefore must review the literature on surgical complications and further develop their skills to improve patient care.5 The surgical morbidity and mortality conference (M&MC)—sometimes referred to as the cornerstone of surgical education—provides surgeons with an opportunity to confront medical errors, openly discuss adverse events, and learn from their mistakes and those of others.6,7 The M&MC is mandated by the Accreditation Council for Graduate Medical Education (ACGME)8 and complies with the Joint Commission requirement for ongoing professional practice evaluation.9
Although all academic surgery departments are required by the ACGME to hold a “weekly review of all complications and deaths,” the structure, content, and format of M&MCs vary widely by department. For example, there is no standardized presentation style or formal guideline for how to present surgical complications in a manner that maximizes the learning value of the M&MC. In addition, there are no robust measures or evaluations of what attendees learn from the presented cases and little data demonstrating the effectiveness of the M&MC as a learning or care improvement tool.10,11 We are particularly concerned that educators presume that attending an M&MC is sufficient for learning, given that the ACGME currently mandates that all educational activities provide measurable benefit in achieving core competencies.
In this study, we evaluated whether implementing a standardized format for M&MCs would lead to attendees’ enhanced understanding of the surgical complication and the adverse events leading to it. We introduced a collaborative communication tool to support this intervention and hypothesized that the introduction of a standardized format for M&MC presentations would improve presentation quality and increase the objective educational value of M&MCs.
Our study took place between September 2009 and January 2011 at the Oregon Health & Science University (OHSU). The OHSU Department of Surgery holds a weekly one-hour conference attended by faculty from general surgery, minimally invasive surgery, trauma, critical care, emergency general surgery, surgical oncology, transplant, cardiothoracic, urology, plastics, pediatric, and vascular surgery (about 10–30 attendees total per conference). Attendance is mandatory for surgical residents and medical students rotating through the main campus services. Each week, three surgical services present at the M&MC, and responsibility for presenting rotates through the services. Each service selects one recent complication for its educational value to discuss in detail. The service’s most senior resident presents the complication at the M&MC. Each presenter has 15 minutes for a PowerPoint case presentation and 5 minutes for questions and answers.
All faculty, residents, and students attending a surgical M&MC during our study period were eligible to participate. We divided conference attendees into two groups—presenters (senior residents who presented the complication at the M&MC) and learners (conference attendees). Participation in our study was optional. For presenters, participation required the presentation of the complication using our new standardized format. For learners, participation required the completion of a multiple-choice questionnaire during the M&MC. The OHSU institutional review board approved our study, and the OHSU Department of Surgery sanctioned it.
Our study had two phases:
* Phase 1: developing a standardized, evidence-based format for M&MC presentations
* Phase 2: conducting a prospective observational pre- and postintervention study to evaluate the effect of the standardized presentation format developed during Phase 1 on educational outcomes
We first conducted an extensive review of the literature published from 1980 through 2009 to identify current best practices regarding M&MC presentations and their effect on educational outcomes. We searched PubMed, MEDLINE, Embase, and PsycINFO using combined MeSH terms: “morbidity and mortality,” “conference,” “presentation,” and “surg*.” From this literature review, we identified key elements of successful M&MC presentations.
Next, an international panel of nine experts from the United States and United Kingdom with backgrounds in surgery, patient safety, education, and psychology reviewed the findings from our literature review. Using their feedback, we then integrated the key elements we identified during our literature review into a standardized presentation format, using an SBAR (Situation, Background, Assessment, Recommendations) framework.12 The SBAR framework was initially developed for high-risk industries to facilitate consistent communication, and it has been shown to improve patient safety and educational outcomes.12 We adapted the SBAR framework for M&MC presentations using the following modifications: Situation included the complication, Background included clinical information pertinent to the subsequent adverse event, Assessment included an analysis of the complication and a delineation of the root cause of the complication, and Recommendations included a discussion of the literature relevant to the cause of the complication and a guide for future care to potentially prevent similar adverse outcomes.
We conducted Phase 2 in two stages over an eight-month period: preintervention (12 weekly conferences, baseline) and postintervention (12 weekly conferences).
During the preintervention stage, residents presented their surgical complications without the SBAR framework. Faculty evaluated residents’ presentation quality using a validated assessment tool.13 This tool assessed all components of the M&MC presentation. We trained faculty assessors to use the tool until they achieved sufficient interrater reliability (intraclass correlation coefficients ≥ 0.70). Two faculty assessors attended each M&MC and provided blinded assessments of residents’ presentations. We checked interassessor agreement monthly to ensure consistency. We also asked learners to complete a questionnaire during each M&MC. These questionnaires included multiple-choice questions, with each question written to match the learning point(s) of each M&MC presentation.
During the postintervention stage, we asked residents assigned to present a surgical complication to use the standardized SBAR framework. We provided all presenters with formal guidelines (written instructions and a PowerPoint template) and one-on-one coaching with a research faculty surgeon. Faculty assessors and learners again assessed residents’ presentation quality and educational outcomes as they did during the preintervention stage.
We assessed three outcome measures according to the Kirkpatrick model14 for the evaluation of complex organizational interventions: user satisfaction, presentation quality, and educational outcomes.
We asked presenters to share their feedback on the intervention (the SBAR presentation framework) in terms of its content, delivery, and their overall satisfaction with it, using a structured survey (five-point Likert scales and free-text boxes).
We used a previously validated observational assessment tool to score each presentation.13 We used this tool to systematically assess how presenters reported surgical morbidity and mortality. The tool provides a concise and structured format for formative feedback and uses a five-point Likert scale to evaluate each component of the presentation with descriptive behavioral anchors. We did not provide feedback to presenters during the study period. Overall presentation scores ranged from 15 to 75, with higher scores indicating higher-quality presentations.
We measured educational outcomes using a multiple-choice questionnaire. At the time of each presentation, we asked learners to anonymously complete a questionnaire with a multiple-choice question on the focal learning point of each M&MC presentation (see Appendix 1 for a sample). Two board-certified general surgeons trained in multiple-choice-question-writing techniques wrote our questionnaire according to National Board of Medical Examiners (NBME) guidelines. An expert test item developer trained by the NBME reviewed each multiple-choice question to ensure reliability. Throughout the study, we standardized the multiple-choice question format according to NBME guidelines so that each multiple-choice question included a stem (test question) with one single best answer and four plausible distracters.
We took care to ensure that the difficulty of the multiple-choice questions before the intervention matched the difficulty of those after the intervention, such that we could not attribute any performance improvements after the intervention to easier multiple-choice questions. To achieve this balance, we rated all multiple-choice questions for degree of difficulty. Two faculty general surgeons blinded to the distribution of multiple-choice questions pre versus post intervention rated each question’s degree of difficulty using a five-point Likert scale: 1 = “too easy” (can be answered without further information, or a pure knowledge recall item), 3 = “appropriate American Board of Surgery–level question” (requires comprehension and application of knowledge), 5 = “too difficult” (topic is obscure or superfluous; topic is not likely to be encountered in practice).
We carried out our statistical analyses using SPSS version 19.0 (SPSS, Inc., Somers, New York). We used nonparametric statistical tests (Mann–Whitney tests) to analyze feedback on the intervention and quality of the presentations pre versus post intervention. We used a nonparametric test of independent proportions to analyze educational outcomes measures. We then analyzed the results of our questionnaires (using Mann–Whitney tests) by level: junior learners (medical students and postgraduate year [PGY] 1–PGY2 residents) versus senior learners (PGY3 or higher and faculty). We considered P < .05 to be statistically significant.
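For readers who wish to reproduce this style of analysis outside SPSS, the two comparisons above can be sketched in a few lines of Python. This is a minimal illustration only: the scores and answer counts below are hypothetical placeholders, not the study’s data, and the functions are simplified textbook versions of the Mann–Whitney U statistic (with average ranks for ties) and a pooled two-proportion z test.

```python
import math

def mann_whitney_u(x, y):
    """U statistic for sample x vs. sample y, using average ranks for ties."""
    combined = sorted((v, i) for i, v in enumerate(x + y))
    ranks = [0.0] * len(combined)
    i = 0
    while i < len(combined):
        j = i
        # Find the run of tied values and assign each its average 1-based rank.
        while j + 1 < len(combined) and combined[j + 1][0] == combined[i][0]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[combined[k][1]] = avg_rank
        i = j + 1
    rank_sum_x = sum(ranks[: len(x)])
    return rank_sum_x - len(x) * (len(x) + 1) / 2

def two_proportion_z(k1, n1, k2, n2):
    """z statistic for H0: p1 == p2, using the pooled standard error."""
    pooled = (k1 + k2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (k1 / n1 - k2 / n2) / se

# Hypothetical presentation-quality scores (possible range 15-75).
pre_scores = [42, 45, 48, 50, 44, 47]
post_scores = [55, 58, 52, 60, 57, 54]
u = mann_whitney_u(pre_scores, post_scores)  # small U -> groups barely overlap

# Hypothetical correct-answer counts pre vs. post intervention.
z = two_proportion_z(310, 418, 500, 829)  # positive z -> higher post proportion
```

In practice one would obtain the accompanying P values from `scipy.stats.mannwhitneyu` and a normal-distribution lookup rather than hand-rolling them; the sketch is only meant to make the two tests concrete.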
From our literature review, we identified areas for improvement in the M&MC process (see List 1). We incorporated these areas into the standardized SBAR framework (see Table 1). The resulting systematic, structured format for presenting surgical complications included the complication and a description of the clinical information pertinent to the subsequent adverse event, identified the root system causes of the adverse event, discussed the literature relevant to causality, and provided recommendations (based on the best medical evidence available) to prevent similar adverse events from occurring in the future.
All 12 surgical divisions were represented in our study. Presenters (PGY3–8) included 24 surgical residents during the preintervention stage and 19 during the postintervention stage. Learners included 42 surgical faculty, 107 residents, and 48 medical students. We estimate that these learners represent about 80% of all M&MC attendees, although we did not collect data to compare participants and nonparticipants.
During the preintervention stage, presenters discussed 36 cases, which matched the 36 corresponding multiple-choice questions. During the postintervention phase, presenters discussed 30 cases, which matched the 30 corresponding multiple-choice questions.
Presenters’ quantitative evaluations of the intervention were very positive, with all ratings above the scale midpoint (3). In their free-text comments, presenters universally thought the standardized M&MC presentation framework was simple to implement, did not add to their preparation time, and provided a helpful guide for structuring presentations. Learners preferred the standardized format over having no prescribed format.
Presentation quality, as rated by the faculty assessors, improved significantly after the intervention (see Table 2). We noted improvements in overall, Background, Assessment, and Recommendations scores. We noted no improvement in Situation or literature review scores.
We collected a total of 1,247 multiple-choice question responses (829 pre- and 418 postintervention) from learners. Of these, faculty completed 203 responses (16%), residents 743 (60%), and medical students 301 (24%). All learners’ performances improved significantly after the intervention (see Table 3). However, senior learners (senior residents and attendings) performed better than junior learners (medical students and junior residents) both before and after the intervention (see Table 3). The magnitude of the difference, however, was reduced after the intervention, although it remained significant.
The degree of difficulty of the multiple-choice questions was well matched before and after the intervention. The mean degree of difficulty measured 3.11 before the intervention (standard deviation = 0.84) and 3.14 after the intervention (standard deviation = 0.77) (P = .75).
To provide high-quality patient care, physicians must continuously engage in the objective review of adverse events. M&MCs can play a pivotal role in this process as well as in ensuring patient safety and quality care if they engage audiences in clinical learning that leads to systematic process change. The surgical M&MC is a forum that provides surgeons an opportunity to review and confidentially discuss medical errors and adverse events. Although many consider the M&MC to be the cornerstone of surgical education, no one has objectively evaluated the explicit learning function of the M&MC or standardized the conference process. Consequently, an ad hoc approach to the M&MC format has evolved without formal evaluation of its educational value.
We conducted our study to fill this gap in the literature. We designed our study to determine whether implementing a standardized format for case presentations during M&MCs improved presenters’ presentation skills and attendees’ learning. We first developed a comprehensive, evidence-based, structured format for M&MC presentations on the basis of the existing surgical literature and the SBAR framework.10,11,15–17 Into this structured format, we incorporated other elements that we identified during our literature review to be critical to an effective M&MC presentation. This new standardized format offered presenters a clear and concise organization for including clinical information, such as the presentation of facts, a summary of the literature, and a discussion of how medical evidence supports best clinical practices. Our postintervention data indicate that the quality of the M&MC presentations improved after we implemented our adapted SBAR framework. Importantly, this format resulted in the clearer delivery of the critical learning point(s) of the presentations.
We noted significant improvements in presentation quality for three sections—Background information, Assessment and root cause analysis, and Recommendations for preventing future complications. The improvement in Recommendations is of particular importance because providing clear recommendations for preventing future errors is critical both in the maturation process of all trainees and to providing safer care to future patients. We did not find improvement in the literature review section, which may reflect our assessors’ bias as faculty surgeons, who generally have a better grasp of the pertinent literature than trainees. This finding suggests that educators should focus on improving discussions of the pertinent evidence base in relation to the adverse event in future efforts to enhance M&MCs.
Educational outcomes also improved significantly after the intervention. We found that knowledge acquisition improved for all learners, including for experienced attending-level surgeons. This improvement was not confounded by the degree of difficulty of the multiple-choice questions, as we ensured that both pre- and postintervention questions had a similar degree of difficulty. Interestingly, we found that the performance of the junior learners after the intervention matched that of the senior learners before the intervention. This finding suggests that our intervention boosted the knowledge of the junior learners to that of their senior peers before the intervention. In other words, the standardized presentation format improved junior learners’ understanding of the presenter’s recommendations. In addition, the most pronounced improvements in presentation quality were in the Assessment and Recommendations sections. We believe that these two sections improved concomitantly because as the presenters’ ability to communicate critical material improved, so, too, did learners’ understanding of the case presentations’ learning points.
Our study has several limitations. First, presenters were aware that others were evaluating their presentations both before and after the intervention. Thus, improvements in their performance may be attributable to a Hawthorne effect. However, presenters’ preintervention performance did not improve over the course of the study, and their improvement was only significant after the intervention, leading us to believe that their knowledge of the evaluations had little effect on their performance. Second, our study was a relatively small, single-institution study, which could have resulted in faculty assessor observer bias. We tried to minimize this bias by training the assessors13 and systematically checking interassessor reliability during the study. Our conclusions will be more generalizable when others test our results on a larger scale, at additional institutions, and with external assessors. Further, although the improvements we found were statistically significant, they were rather small in absolute terms. As statistical significance does not equate to educational significance, further evaluation is needed of the impact of our intervention on the translation of learning to practice. Finally, we measured educational outcomes only at the time of the M&MC presentation. Measuring the long-term effects of our intervention is needed to establish whether presenters retain those skills they learned and learners retain the knowledge they gained.
Our findings from this study also carry implications for the larger academic medicine community. The M&MC represents an authentic, relevant, and contemporary workplace-based learning opportunity that offers the potential for peer learning and reflection on surgical care provision, such that medical errors and omissions can be identified and dealt with in a manner that prevents their recurrence and improves patient care. We developed and evaluated an evidence-based, practical, and inexpensive tool that enhances the educational outcomes of such conferences. Our standardized SBAR framework provides a simple, easily implementable set of guidelines for inexperienced trainees on how to give an effective educational presentation to both junior and senior surgeons. This simple intervention can have a positive impact on practice-based education without overstretching hospital budgets, which is especially important today in a time of financial austerity for many institutions. For those programs without access to NBME expertise, the elaborate evaluation process we completed may not be feasible or necessary for implementing our standardized M&MC format. Finally, others can use the tool we developed to assess presentation quality to provide residents with formative feedback, and it can serve as a focus for presentation skills debriefing. Educators then can use the recently validated Objective Structured Assessment of Debriefing tool18 to evaluate these debriefings, linking presentation quality to educational outcomes.
The M&MC provides an opportunity for educators to assess the knowledge, skills, and attitudes of trainees from novices to experts. As George Miller19 pointed out, there is “no single assessment method (that) can provide all the data required for the judgment of anything so complex as the delivery of professional services by a successful physician.” Our findings demonstrate that the M&MC can provide a forum for evaluating what trainees know and how they apply this knowledge to patient care. Although our tool does not directly evaluate residents’ clinical competence or performance, the M&MC itself provides the opportunity for educators to evaluate residents’ application and demonstration of their knowledge and ability to integrate that knowledge into their daily clinical practice. We hope that improvements in M&MC attendees’ discussions and analysis of surgical adverse events will lead to improvements in clinical performance, reductions in surgical complications, and enhancements to patient safety. Further longitudinal research should explore whether a standardized M&MC presentation format can enhance clinical outcomes by preventing the recurrence of adverse events.
The M&MC can play a pivotal role in educating residents and improving patient safety. Our standardization of the M&MC presentation format using the SBAR framework improved the quality of residents’ presentations and attendees’ educational outcomes. We recommend using such a standardized presentation format to enhance the educational value of the M&MC, with the ultimate goal of improving surgeons’ knowledge, skills, and patient care practices.
Acknowledgments: The authors thank John G. Hunter, MD, OHSU chair of surgery; Karen E. Deveney, MD, OHSU vice chair of education and program director of surgery; Karen Kwong, MD, associate program director of surgery; and all of the OHSU surgical residents who participated in this study.
Funding/Support: Drs. Sevdalis and Arora are affiliated with the Imperial Centre for Patient Safety and Service Quality, which is funded by the United Kingdom’s National Institute for Health Research.
Other disclosures: None.
Ethical approval: The OHSU institutional review board approved this study, and the OHSU Department of Surgery sanctioned it.
Previous presentations: The authors presented the findings from this study at the Association for Surgical Education and Association for Program Directors in Surgery, Surgical Education Week, Boston, Massachusetts, March 24, 2011.
1. Vincent C, Moorthy K, Sarker SK, Chang A, Darzi AW. Systems approaches to surgical quality and safety: From concept to measurement. Ann Surg. 2004;239:475–482
2. Leape LL, Brennan TA, Laird N, et al. The nature of adverse events in hospitalized patients. Results of the Harvard Medical Practice Study II. N Engl J Med. 1991;324:377–384
3. Thomas EJ, Studdert DM, Burstin HR, et al. Incidence and types of adverse events and negligent care in Utah and Colorado. Med Care. 2000;38:261–271
4. Gawande AA, Thomas EJ, Zinner MJ, Brennan TA. The incidence and nature of surgical adverse events in Colorado and Utah in 1992. Surgery. 1999;126:66–75
5. Arora S, Sevdalis N, Suliman I, Athanasiou T, Kneebone R, Darzi A. What makes a competent surgeon? Experts’ and trainees’ perceptions of the roles of a surgeon. Am J Surg. 2009;198:726–732
6. Gordon LA. Gordon’s Guide to the Surgical Morbidity and Mortality Conference. Philadelphia, Pa: Hanley & Belfus; 1994
7. Gordon LA. Can Cedars-Sinai’s “M+M Matrix” save surgical education? Bull Am Coll Surg. 2004;89:16–20
10. Gore DC. National survey of surgical morbidity and mortality conferences. Am J Surg. 2006;191:708–714
11. Harbison SP, Regehr G. Faculty and resident opinions regarding the role of morbidity and mortality conference. Am J Surg. 1999;177:136–139
13. Mitchell EL, Lee DY, Arora S, et al. SBAR M&M: A feasible, reliable, and valid tool to assess the quality of surgical morbidity and mortality conference presentations. Am J Surg. 2012;203:26–31
14. Kirkpatrick DL, Kirkpatrick JD. Evaluating Training Programs: The Four Levels. San Francisco, Calif: Berrett-Koehler Publishers; 2009
15. Murayama KM, Derossis AM, DaRosa DA, Sherman HB, Fryer JP. A critical evaluation of the morbidity and mortality conference. Am J Surg. 2002;183:246–250
16. Risucci DA, Sullivan T, DiRusso S, Savino JA. Assessing educational validity of the morbidity and mortality conference: A pilot study. Curr Surg. 2003;60:204–209
17. Prince JM, Vallabhaneni R, Zenati MS, et al. Increased interactive format for morbidity & mortality conference improves educational value and enhances confidence. J Surg Educ. 2007;64:266–272
18. Arora S, Ahmed M, Paige J, et al. Objective structured assessment of debriefing: Bringing science to the art of debriefing in surgery. Ann Surg. 2012;256:982–988
19. Miller GE. The assessment of clinical skills/competence/performance. Acad Med. 1990;65(9 suppl):S63–S67
Appendix 1 Example of a Multiple-Choice Question and Likert Scale Evaluating the Educational Outcomes of a Surgical Morbidity and Mortality Conference Presentation, Oregon Health & Science University, 2009–2011
A 65-year-old male underwent an uncomplicated left carotid endarterectomy for asymptomatic >80% internal carotid stenosis. His postoperative course was unremarkable, and he was discharged to home on postoperative day one. Seven days later, the patient is brought to the emergency department for new-onset mild confusion and abnormal speech. His wife states that the patient has been complaining of severe headaches for several days. In the emergency department, his vital signs are as follows: HR 83, BP 160/92, RR 14, oxygen saturation 93% room air. On physical examination, he has no neurologic deficits, and his surgical wound is clean. A head CT scan reveals no intracranial pathology.
What is the most appropriate next step in the management of this patient?
A. Duplex ultrasonography of the carotid arteries
B. Cerebral angiography
C. Admit patient to ICU for blood pressure control
D. Admit patient to ICU for tPA administration
E. Reassure patient and his wife and schedule a clinic appointment within the week
Did you know the answer prior to the morbidity and mortality conference?
Yes / No