SELECTED RESEARCH AND INNOVATION ABSTRACTS
1 INTERPROFESSIONAL ERROR DISCLOSURE SIMULATION BENEFITS BOTH STUDENTS AND FACULTY
Carla Dyer, MD1, Gretchen Gregory, MS, RN2, Erica Ottis3, Dena Higbee, MS1, Les Hall, MD1
1UNIVERSITY OF MISSOURI SCHOOL OF MEDICINE, COLUMBIA MO, USA 2UNIVERSITY OF MISSOURI SINCLAIR SCHOOL OF NURSING, COLUMBIA MO, USA 3UNIVERSITY OF MISSOURI-KANSAS CITY SCHOOL OF PHARMACY, KANSAS CITY MO, USA
Introduction: Error disclosure is a challenging part of clinical practice. An interprofessional error disclosure program from a collaborating institution was adapted to incorporate standardized family members. The goal was to demonstrate that the program could be adopted effectively, as measured by improvements in self-reported student knowledge and comfort, while promoting interprofessional collaboration. The exercise also provided faculty development.
Methods: 183 health professional students (second-year medical, seventh-semester baccalaureate nursing, and third-year PharmD) participated in a three-hour error disclosure course. Following a lecture on disclosure techniques, interprofessional faculty facilitated disclosure training for groups of 10 students. After group planning, teams of 3–4 students disclosed the error to a standardized family member, who reacted uniquely to each group. Advance standardized patient training included practice with a range of emotional responses and active debriefing techniques. Peers, the family member, and faculty provided feedback following each disclosure encounter; interactive discussion followed. Before and after the encounter, students completed 10 IRB-approved questions regarding knowledge, attitudes, and comfort with disclosure, using a 5-point Likert scale (5=strongly agree). Students and faculty completed qualitative and quantitative evaluations. Analysis of paired pre/post survey responses is underway.
Results: The post-survey completion rate was 96–100% across student groups. Mean confidence in disclosing errors increased from 3.71 to 4.48 (pharmacy), 3.28 to 4.22 (medicine), and 3.57 to 4.21 (nursing) following the event. Mean self-reported knowledge of error disclosure techniques increased from 2.90 to 4.43 (pharmacy), 2.38 to 3.95 (medicine), and 3.12 to 4.19 (nursing). 95–100% of students and 100% of faculty agreed or strongly agreed that having interprofessional learners made this a more valuable learning experience. The majority of faculty/staff facilitators stated they would apply something learned to their practices.
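The paired pre/post analysis described as underway could take the form of a paired t test on each cohort's Likert responses. The sketch below shows one way to compute the paired t statistic; the responses are invented for illustration and are not study data:

```python
import math
from statistics import mean, stdev

def paired_t(pre, post):
    """Paired t statistic: mean of within-student differences
    divided by the standard error of those differences."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    return mean(diffs) / (stdev(diffs) / math.sqrt(n))

# Hypothetical 5-point Likert responses from one small cohort
pre  = [3, 4, 3, 4, 3, 4, 4, 3]
post = [4, 5, 4, 5, 4, 4, 5, 4]
print(round(paired_t(pre, post), 2))  # 7.0
```

A significance test would then compare this statistic against a t distribution with n−1 degrees of freedom; with ordinal Likert data, a non-parametric alternative such as the Wilcoxon signed-rank test is also common.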
Conclusions: After minor modifications, this disclosure exercise was successfully transferred from one institution to another, adding a valuable new role for standardized family members. Results demonstrate significant improvement in self-reported knowledge of disclosure and comfort with the skill across all professional groups. Students also reported improved understanding of interprofessional roles and teamwork. An added benefit was the faculty development that occurred as a result.
Acknowledgment: This study was originally designed by the University of Washington and was adapted at the University of Missouri to use standardized patients instead of faculty members. The studies at both universities were funded in part by grants from the Josiah Macy Foundation.
2 AN INTERPROFESSIONAL ROUNDING SIMULATION WITH STUDENTS OF MEDICINE, NURSING, AND PHARMACY
Lee Ann Miller, EdD1, David Wilks, MD2, Jay Martello, PharmD3, Charles Ponte, PharmD3, Jason Oreskovich, DO2, Daniel Summers, RN, BSN, CEN, EMT-P1, Gail Van Voorhis, MSN, RNC4, Rebecca Kromar, DNP, MBA, RN4, Jon Wietholter, PharmD3, Lena Maynor, PharmD3, and Gina Baugh, PharmD3
1WEST VIRGINIA SIMULATION TRAINING AND EDUCATION FOR PATIENT SAFETY (WV STEPS) 2WEST VIRGINIA UNIVERSITY SCHOOL OF MEDICINE 3WEST VIRGINIA UNIVERSITY SCHOOL OF PHARMACY 4WEST VIRGINIA UNIVERSITY SCHOOL OF NURSING
Introduction: Interprofessional education has become increasingly important to the development of patient-centered practitioners in health care. Interprofessional education occurs when 2 or more professions learn with, from, and about each other to improve collaboration and the quality of care.1 Such collaboration has been shown to improve student attitudes, communication, and clinical confidence.2 The inclusion of both high-fidelity simulators and trained standardized patients allowed our students to experience a diverse set of patient medical conditions.
Methods: Faculty members from Pharmacy, Nursing, and Medicine constructed an authentic rounding experience for small groups. Students were divided into working groups consisting of 1 medical student, 2 nursing students, and 2 pharmacy students. Two cases were presented: an SP with a bee sting and cellulitis, and a manikin with an ischemic stroke requiring ventilation. Both patients had comorbidities to enrich the learning experience. Students were provided with a chart and worksheet for each patient. Pre- and post-tests of knowledge, with questions drawn from all 3 disciplines, were administered. SPs provided verbal feedback and a group performance checklist. Encounters were video recorded for subsequent review and evaluation. After both encounters were complete, small groups met to develop an integrated care plan. Completion of collaboration and attitudinal surveys concluded the event.
Results: Preliminary outcomes suggest that students from different health care fields of study are, overall, satisfied with the collaboration that occurred during the simulation activity. Further analysis will determine what factors affect satisfaction. Students also improved their pre/post scores of basic knowledge (from 57% to 80% on average). Sixty-two percent of the students found the interprofessional experience valuable overall, yet many found it somewhat overwhelming. SP observations were positive.
Conclusions: Results of this project suggest that students who learn via simulation in interprofessional groups are challenged yet quite positive about the experience. Gains in basic knowledge scores suggest that information is shared within the small groups. Our institution is encouraged to further develop interprofessional activities in health care education.
1. Center for Advancement of Interprofessional Education (CAIPE). http://www.caipe.org.uk/about-us/defining-ipe. Accessed November 28, 2012.
2. Shrader S, McRae L, King WM, and Kern D: A simulated interprofessional rounding experience in a clinical assessment course. Am J Pharm Educ 2011; 75(4): 1–8.
3 USE OF STANDARDIZED PATIENTS TO EVALUATE MEDICAL STUDENT CLINICAL SKILLS EVALUATION POST ENCOUNTER NOTES
Rhonda A Sparks, MD, Michelle D Wallace, BS, and Britta M Thompson, PhD
CLINICAL SKILLS EDUCATION AND TESTING CENTER, UNIVERSITY OF OKLAHOMA COLLEGE OF MEDICINE, OKLAHOMA CITY OK, USA
Introduction: Many institutions utilize a multiple-station Clinical Skills Evaluation (CSE) to assess clinical performance during medical school.1,a Students typically produce a standard SOAP note after simulated outpatient encounters, and these notes are then evaluated by clinical faculty members to provide accurate student evaluation and feedback. The large number of notes produced by this evaluation places a significant burden on clinical faculty time and usually demands quick turnaround. To facilitate prompt evaluation of notes, we employed standardized patients with clinical experience or background to evaluate student notes using a standard checklist format.
Project Description: Fourth-year medical students (n=161) completed an 8-station CSE, complete with a standard SOAP note. Prior to the CSE, a group of 8 faculty members determined critical history items, necessary physical exam components, appropriate differential diagnoses, and the appropriate items for the patient work-up. Using this information, checklists were developed for grading note components. All notes were graded by 4 standardized patients chosen based on their previous experience in the health professions. SPs were trained and graded 2 cases; a clinician was available for questions. A clinical faculty member graded a random sample of notes (n=20) for each case (total=180) to validate the evaluations. We analyzed agreement between SP ratings and faculty ratings using percent agreement and kappa.
Outcomes: Analysis indicated almost perfect agreement overall between SP note graders and the clinical faculty member (Agreement=96%, κ=.93). Analysis of each case indicated that agreement ranged from 90.1% to 99%, with kappa ranging from 0.77–0.99. We analyzed each of the note components and noted that agreement was 97%, 97%, 93% and 97% while kappa was 0.93, 0.93, 0.87 and 0.94 for history, physical, diagnosis, and workup, respectively.
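The two agreement statistics reported above, percent agreement and Cohen's kappa, can be computed from paired ratings as in the following sketch; the SP and faculty ratings shown are hypothetical, not data from this study:

```python
from collections import Counter

def percent_agreement(r1, r2):
    """Proportion of items on which the two raters gave the same rating."""
    return sum(a == b for a, b in zip(r1, r2)) / len(r1)

def cohens_kappa(r1, r2):
    """Cohen's kappa: observed agreement corrected for the agreement
    expected by chance, given each rater's marginal rating frequencies."""
    n = len(r1)
    po = percent_agreement(r1, r2)
    c1, c2 = Counter(r1), Counter(r2)
    pe = sum((c1[k] / n) * (c2[k] / n) for k in set(r1) | set(r2))
    return (po - pe) / (1 - pe)

# Hypothetical yes/no (1/0) checklist-item ratings: SP grader vs faculty
sp      = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]
faculty = [1, 1, 0, 1, 0, 1, 0, 0, 1, 1]
print(percent_agreement(sp, faculty))        # 0.9
print(round(cohens_kappa(sp, faculty), 2))   # 0.78
```

Kappa is lower than raw agreement because it discounts chance matches, which is why values above 0.8 (as in the study) are conventionally read as almost perfect agreement.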
Conclusions/Discussion: Standardized patients with previous experience within a healthcare field can accurately evaluate medical student CSE SOAP notes using a checklist carefully developed by clinical faculty. This will allow more timely, efficient and cost effective evaluation of student SOAP notes.
a. Corbett EC Jr and Whitcomb M: The AAMC Project on the Clinical Education. Available at https://www.aamc.org/download/68526/data/clinicalskillscorbett.pdf.
1. Barzansky B and Etzel S: Medical schools in the United States, 2009–2010. JAMA 2010; 304(11): 1254.
4 ADEQUATE REPRESENTATION OF SOCIO-CULTURAL ISSUES IN OUR STANDARDIZED PATIENT SCENARIOS?
Karen Szauter, MD1, Valerie Fulmer, BS2, Dehra Glueck, MD3
1THE UNIVERSITY OF TEXAS MEDICAL BRANCH, GALVESTON TX, USA 2UNIVERSITY OF PITTSBURGH SCHOOL OF MEDICINE, PITTSBURGH PA, USA 3WASHINGTON UNIVERSITY SCHOOL OF MEDICINE, ST. LOUIS MO, USA
Introduction: Healthcare workers interact, and must be prepared to engage, with people from diverse backgrounds. A foundational understanding of demographic, social, cultural, racial and ethnic influences on health and disease is therefore essential. Additionally, awareness of personal socio-cultural bias is critical to ensure optimal patient care.1,2 Standardized patient [SP] experiences provide important opportunities for students’ clinical skill development and provide an ideal educational environment for personal reflection following an encounter. We performed this study to evaluate the representation of demographic, social and cultural variables in SP scenarios used in medical education.
Methods: SP training materials from three universities were reviewed. In addition to the learning objectives and presenting problem, we extracted case details including patient age, sex, ethnicity, educational background, employment, sexual orientation, life details, and substance use. A common data collection form was used by all investigators and data were entered into a master database. Descriptive analysis was performed across cases and by school.
Results: Information from 228 SP scenarios was evaluated. Case use included teaching (32.4%), formative assessment (18.1%), and graded exercises (49.5%). More than half involved acute presenting problems; less than 10% focused on behavioral issues. Patient ages ranged from newborn to 83 years; sex designations were male 22.9%, female 38.8%, either 38.3%. Designation of a specific SP race/ethnicity was rare, and when non-Caucasian, ethnicity was directly relevant to the case content (e.g., translator, beliefs about healthcare). Level of education typically exceeded high school (68.2%); only 12.2% of scenarios designated patients as unemployed. In most scenarios patients were married (54.2%) and living with their spouse. Only 2% of cases included bi- or homosexual patients; one school included only heterosexual patients. Patients’ religious affiliation and healthcare insurance information were routinely included by only one school. Current or past tobacco use (in about half of cases) focused on cigarettes. Current alcohol use was common across cases; use of illicit drugs was rarely included.
Conclusions: Our study has identified opportunities to enrich SP scenarios to better represent the diverse populations that we serve.3,4 We also have recognized a need for comprehensive review of SP case libraries to ensure broad inclusion of socio-cultural content.
1. Teal C, Gill A, Green AR, and Crandall S: Helping medical learners recognise and manage unconscious bias toward certain patient groups. Med Educ 2012;46(1):80–88.
2. Kumagai A and Lypson M: Beyond Cultural Competence: Critical Consciousness, Social Justice and Multicultural Education. Acad Med 2009; 84(6): 782–787.
3. Obedin-Maliver J, Goldsmith E, Stewart L, et al: Lesbian, Gay, Bisexual and Transgender-Related Content in Undergraduate Medical Education. JAMA 2011; 306(9): 971–977.
4. Turbes S, Krebs E, and Axtell S: The Hidden Curriculum in Multicultural Medical Education: The Role of Case Examples. Acad Med 2002; 77(3): 209–216.
5 COMMUNICATING THE DIAGNOSES: ARE WE ALL ON THE SAME PAGE?
Karen Szauter, MD, Lori Kusnerik, AAS, Anita Mercado, MD, Michael Ainsworth, MD
THE UNIVERSITY OF TEXAS MEDICAL BRANCH, GALVESTON TX, USA
Introduction: Synthesizing patient information and communicating diagnostic impressions are complex skills. What patients are told, what they comprehend, and what is documented in the medical record ideally should align. We studied the association of diagnostic information between what is said (by students), what is comprehended (by patients) and what is written (by students) in patient notes.
Methods: We studied two scenarios from our 2012 Clinical Skills Assessment (CSA): a man with a syncopal episode (SYNC) and a woman with an abnormal liver profile (LAB). Four standardized patients (SPs) were trained to portray/score each case. Students were allotted 15 minutes for encounters and 10 minutes for post-encounter notes (including documentation of a prioritized differential diagnosis). Encounters were video-recorded. SPs were asked to document the content and clarity of diagnoses provided by students. Video-recorded encounters were transcribed and reviewed by two investigators to identify the diagnoses that had been communicated to the patients.
Three sources for diagnoses were compared: what students said (from transcribed encounters), what patients heard (SP recall/documentation) and what students wrote (patient notes). Descriptive analysis was performed. We compared diagnoses recalled by SPs to those documented in the post-encounter note. Where mismatches occurred, we reviewed transcribed information to determine whether the student had truly discussed the diagnosis during the encounter.
Results: 281 senior students participated in the CSA (218 medical, 63 physician assistant). SPs noted “no diagnosis provided” in 23.6% (SYNC) and 29.0% (LAB) of the encounters. A clear, likely diagnosis was given in 34.4% (SYNC) and 25.8% (LAB); the remaining encounters included multiple, potential diagnoses. Comparison of “heard” to “written” information revealed notable content variation. In over half of encounters, diagnoses written in notes were not identified by SPs (written, not heard). The majority of these were not discussed during the encounter (written, not said).
Conclusions: Providing diagnostic impressions requires content knowledge and communication skills. This work demonstrated that SPs can accurately recall the diagnoses they were told, but the diagnoses students documented in their notes often differed. This difference requires further investigation, as mismatches have important implications in actual patient care.1,2
1. McCarthy DM, Waite K, Curtis L, et al: What did the doctor say? Health literacy and recall of medical instructions. Med Care 2012; 50(4): 277–282.
2. Olson DP and Windish D: Communication discrepancies between physicians and hospitalized patients. Arch Intern Med 2010; 170(15): 1302–1307.
6 USING STANDARDIZED PATIENTS TO PREPARE “SUPER USERS” FOR AN EMR ROLLOUT
Jeanette Wong, RN, MPA, Celeste Villanueva, CRNA, MS
HEALTH SCIENCES SIMULATION CENTER, SAMUEL MERRITT UNIVERSITY, OAKLAND CA, USA
Introduction: The rollout of an electronic medical record (EMR) system is a significant event in a healthcare system. One strategy for success is to have trained staff ready to assist during the rollout period. In addition, providers struggle to maintain patient-focused care while documenting in an EMR. Traditional EMR preparatory training combines classroom and online training modules. We added standardized patients to increase realism, give providers the opportunity to experience potential obstacles to patient-focused care, and help them develop strategies to overcome those obstacles.
Methods: The Health Sciences Simulation Center (HSSC) at Samuel Merritt University (SMU), an affiliate of Sutter Health, was requested by Sutter Health East Bay to develop a simulation experience for the Super Users (healthcare providers who volunteered to go through extensive EMR training to assist their colleagues during implementation) to have a “hands-on” experience before the system goes “live.” The experience was designed to use standardized patients because of the realism the encounter would provide. The learner groups were interdisciplinary and included physicians, nurses, respiratory therapists, pharmacists and other allied health care professionals. The case was a patient with sepsis requiring admission orders, an initial nursing assessment, a respiratory treatment, and medication delivery. These interventions allowed for an interdisciplinary team to interact with the patient and chart in the EMR system. The encounter was videotaped and then debriefed with HSSC faculty, EMR expert trainers, and, of course, the standardized patient.
Results: More than 90 healthcare professionals participated in the EMR standardized patient training. The feedback was overwhelmingly positive and learners understood the importance of patient-focused care and how difficult it can be when working with an EMR system. The feedback provided the Super Users with strategies on how to maintain patient-focused care while working with the EMR; they were encouraged to share the strategies with their colleagues during the implementation phase.
Conclusions: The experience was a unique and effective method in preparing Super Users to be role models for colleagues on how to communicate and continue to provide patient-focused care while utilizing an EMR system.
7 FLYING HIGH: INTEGRATING HYBRID STANDARDIZED PATIENT SIMULATION MODALITIES IN TRAINING PROGRAMS FOR FLIGHT MEDICS AND OTHER CRITICAL CARE TRANSPORT SPECIALISTS
Jorge D Yarzebski, BA, EMTP1, Wendy L Gammon, MA, MEd1, Adam Darnobid, MD2, William Tollefsen, MD2, Angela Talbot, RN3
1INTERPROFESSIONAL CENTER FOR EXPERIENTIAL LEARNING AND SIMULATION (ICELS), OFFICE OF CONTINUING MEDICAL EDUCATION, OFFICE OF MEDICAL EDUCATION, STANDARDIZED PATIENT PROGRAM, OFFICE OF EDUCATIONAL AFFAIRS, UNIVERSITY OF MASSACHUSETTS MEDICAL SCHOOL, WORCESTER MA, USA 2EMERGENCY MEDICINE, UMASS MEMORIAL HEALTHCARE, UMASS MEDICAL SCHOOL, WORCESTER MA, USA 3EMERGENCY MEDICINE, UMASS MEMORIAL EMS/LIFEFLIGHT, DEPARTMENT OF NURSING, UMASS MEMORIAL HEALTHCARE, WORCESTER MA, USA
Introduction: Standardized patient (SP) modalities are not commonplace in allied health curricula. Graduate-level programs, including medical, nurse practitioner, and physician assistant programs, utilize SPs to train and assess the clinical competency and psycho-affective domain of their students.1 Upon entering internship, graduate-level students have amassed experience in interviewing patients, performing physical exams, and mitigating crises, which allied health personnel lack in traditional education. Our SP program is developing curricula to train pre-hospital emergency providers (PEPs) with SP-based OSCE testing.
Project Description: A local critical care transport (CCT) team transitioned its flight crew configuration from physician/nurse to paramedic/nurse. The CCT team hired veteran ‘street/911’ paramedics with little to no critical care experience to fill the staffing gap. To prepare the paramedics for work in critical care transport (CCT), the Office of Continuing Medical Education (OCME) and the SP Program created a critical care transport specialist OSCE. A baseline OSCE focuses on communication and simple crisis mitigation based on CRM principles.2 A summative OSCE after a 4-month didactic/practical orientation tests the paramedic’s preparation to work in CCT. Performance is measured objectively using checklists and performance scales.3
Outcomes: The learners successfully completed the OSCEs, which tested clinical competence, difficult patient hand-offs, professionalism, and management of discord among team members. Learners rated the OSCEs pre- and post-encounter on a Likert scale. Positive evaluations highlighted opportunities to learn and practice skills in a non-threatening and positive environment, formal assessment through data collected from OSCE checklists, and structured debriefing through guided discussion.
Conclusions/Discussion: Integration of new hybrid simulation experiences into flight paramedic training curricula helps learners acquire greater confidence, familiarity, and competency in managing patient care and in interpersonal and interprofessional behaviors in austere environments. The goal is to provide a safe environment for the paramedic, patient, and flight crew through this experiential training program. To date there is little evidence of standardized patient use in pre-hospital medicine; moulaged confederates have been used in simulation-based courses, but trained professional SPs have not been evaluated in this setting. The program will demonstrate that this model is achievable and affordable for paramedic training institutions.
1. Newble D: Techniques for measuring clinical competence: objective structured clinical examinations. Med Educ 2004; 38: 199–203.
2. Salas E, Burke CS, Bowers CA, and Wilson KA: Team training in the skies: does crew resource management training work? Hum Factors 2001; 43: 641–674.
3. Bartfay WJ, Rombough R, Howse E, and Leblanc R: Evaluation. The OSCE approach in nursing education. Can Nurse 2004; 100(3): 18–23.
8 COMPARING THE VALIDITY OF CLINICALLY DISCRIMINATING VS TRADITIONAL “THOROUGHNESS” CHECKLISTS
Rachel Yudkowsky, MD, MHPE1, Yoon Soo Park, PhD1, Janet Riddle, MD1, Catherine Palladino2, and Georges Bordage, MD, PhD1
1DEPARTMENT OF MEDICAL EDUCATION, UNIVERSITY OF ILLINOIS AT CHICAGO COLLEGE OF MEDICINE, CHICAGO IL, USA 2UNIVERSITY OF ILLINOIS COLLEGE OF PHARMACY, CHICAGO IL, USA
Introduction: High-quality checklists are essential to the validity of performance tests. A previous study1 found that physical exam checklists that clinically discriminated between competing diagnoses provided more generalizable scores than thoroughness checklists. The purpose of this study was to compare validity evidence for clinically discriminating checklists vs traditional thoroughness checklists, hypothesizing that validity evidence would favor clinically discriminating checklists.
Methods: Faculty developed six SP cases with case-specific history and physical exam checklists of about 20 items each. Four clinician experts independently identified a subset of items that discriminated between the competing diagnoses of each case. All six cases were administered to fourth-year medical students during their summative Clinical Skills Exam (CSE). Half of the SPs for each case were trained to complete the traditional checklist, and half to complete the shorter, clinically discriminating checklist. Video review determined checklist accuracy and ensured traditional scores for all students. We compared validity evidence for CSE scores based on the traditional (“long”) checklist items to evidence for scores based on the subset of clinically discriminating items only (the “short” checklist).
Results: Validity evidence in favor of the clinically discriminating checklist included (1) response process, reflected in significantly higher SP checklist accuracy: kappa of 0.75 for the long checklist, 0.84 for the short checklist, p<.05, and (2) internal structure, as indicated by better item discrimination (0.28 long, 0.42 short, p<.001), internal consistency reliability (.80 long, .92 short), standard error of measurement (z-score 8.87 long, 8.05 short), and Generalizability (G=.504 long, G=.533 short). There were no significant differences overall in relevance ratings, difficulty, or cut scores of short vs long checklist items.
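Of the internal-structure indices above, internal consistency (Cronbach's alpha) can be illustrated directly from a students-by-items score matrix. The sketch below uses a small hypothetical matrix of 0/1 checklist scores, not study data:

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance
    of total scores), for a students x items matrix of item scores."""
    k = len(scores[0])
    item_vars = [pvariance([row[i] for row in scores]) for i in range(k)]
    total_var = pvariance([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical 0/1 checklist scores: 5 students x 4 items
scores = [
    [1, 1, 1, 1],
    [1, 1, 0, 1],
    [1, 0, 1, 0],
    [0, 1, 0, 0],
    [0, 0, 0, 0],
]
print(round(cronbach_alpha(scores), 2))  # 0.69
```

Alpha rises when items covary, that is, when they rank students consistently, which is why dropping weakly discriminating "thoroughness" items can raise reliability even as the checklist gets shorter.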
Conclusions: Limiting checklist items to those that impact the diagnostic decision provided improved accuracy and psychometric indices. “Thoroughness” items that are performed without thinking do not reflect students’ clinical reasoning ability, and may contribute “noise” or construct-irrelevant variance to the score.
Acknowledgment: This study was funded in part by a grant from the National Board of Medical Examiners’ Edward J Stemmler MD Medical Education Research Fund. The project does not necessarily reflect NBME policy, and NBME support implies no official endorsement.
1. Yudkowsky R, Lowenstein T, Riddle J, Otaki J, Nishigori H, and Bordage G: A Hypothesis-Driven Physical Exam for Medical Students: Initial Validity Evidence. Med Educ 2009; 43: 729–740.