See Editorial, p 1194
Effective communication is fundamental to the physician–patient relationship.1 In 2018, the American Board of Anesthesiology started requiring physicians to pass an Objective Structured Clinical Examination to become board certified.2 As part of the examination, physicians must demonstrate proficiency in communication. Unfortunately, communication remains challenging to teach and evaluate.3
While assessments of care should include patient input, existing curricula and evaluation tools for nontechnical skills in anesthesiology do not include direct patient feedback.4,5 To address this issue, we previously designed a mailed ambulatory surgical patient survey to measure changes in residents’ communication skills after a simulation-based curriculum.6 Although we detected an overall improvement, the curriculum did not consider residents’ individual areas for improvement.
We therefore developed a simulation-based curriculum customized for each resident based on patient feedback. To collect feedback, we developed an online survey administered in the postanesthesia care unit. We hypothesized that residents’ communication skills, as measured by our survey, would improve after our curricular intervention.
This study received institutional review board approval for exempt status with a waiver of written informed consent by the Committee on Clinical Investigations at Beth Israel Deaconess Medical Center. This article adheres to the applicable Strengthening the Reporting of Observational Studies in Epidemiology guidelines.
In this prospective cohort study on Beth Israel Deaconess Medical Center anesthesia residents’ communication skills from August 2014 to July 2015, we designed a patient survey to assess residents’ communication skills before and after a customized simulation-based curriculum.
Participants included ambulatory surgical patients and anesthesia residents (postgraduate years 2–4; N = 54) at Beth Israel Deaconess Medical Center during the study. We excluded patients who met ≥1 of the following:
- Did not recall interacting with his/her resident before surgery;
- Was unavailable because of privacy curtains, being occupied with health care professionals, contact precautions, and/or persistent fatigue, nausea, or inability to attend sufficiently to the survey;
- Did not speak English or was deaf with no interpreter present; and/or
- Had undergone a dilation and curettage, dilation and evacuation, or cataract surgery.
Based on the Four Habit Coding Scheme and our previous study, we developed a new patient survey to assess residents’ communication skills.6,7 This 10-question survey asked the patient to rate different aspects of his/her resident’s communication skills on a scale of 1 (lowest) to 4 (highest). The new survey (Supplemental Digital Content, Appendix 1, http://links.lww.com/AA/C748) differed from the previous survey as follows:
- Online design: The new survey, optimized for use on an Apple iPad (Apple Inc, Cupertino, CA), was online and housed on a Bluehost server (Bluehost Inc, Orem, UT).
- Resident pictures: To help patient recall, the survey included headshots of the residents in the operating room attire they wear when interacting with patients.
- Sliding rating scale: Instead of discrete answer choices, the new survey had a sliding scale to allow for continuous ratings rounded to the nearest hundredth. The numerical value of the rating was visible under the scale.
- Option for “Not applicable”: The new survey included a checkbox for “Not applicable” if a patient believed a question was not applicable.
A research assistant (B.L.) introduced the study and administered the survey to patients on an iPad in the postanesthesia care unit. Patients were invited to voluntarily complete the survey. To ensure the patient evaluated the correct resident, the research assistant identified the resident involved in the patient’s care based on the operating room schedule and showed the patient the resident’s picture on the iPad for confirmation before offering the survey for completion.
We designed a curriculum to improve ratings on the survey. In December 2014, 1 investigator (J.D.M.) met with each resident who had survey data from August 2014 to December 2014 to share his/her survey results. Using the data, we identified up to 3 lowest scoring questions for each resident. For each question identified, 1 investigator (J.D.M.) wrote a corresponding 2-part reflective question (Table). We assigned each resident up to 3 reflective questions corresponding to his/her lowest scoring questions. To ensure curricular consistency, residents with fewer than 3 identified questions were also assigned the reflective question(s) corresponding to the group’s lowest scoring question(s), for a total of 3 reflective questions each. Based on the areas assessed on the survey, 2 investigators (J.D.M. and C.K.) designed 2 simulation scenarios depicting positive preoperative and postoperative interviews between a resident and a patient.
We implemented the simulation sessions on 2 days (1 per day) during regularly scheduled didactic sessions. For each session, after viewing the simulation, each resident completed his/her assigned reflective questions, which were intended to encourage residents to think more about the areas in which they could improve. Residents unable to attend the live sessions were allowed to complete an online module in which they viewed each recorded simulation and responded to their assigned reflective questions. All residents (N = 54) viewed and completed the reflective questions for at least 1 simulation in person or online by mid-April 2015. Forty-three residents viewed and completed the questions for both simulations.
Data analysis was performed with Stata Special Edition 13.1 (StataCorp LP, College Station, TX); P < .05 was considered significant. Excluding blank surveys, we did the following for each survey:
- We coded the time period as “preintervention” (before the resident completed his/her first simulation) or “postintervention” (after the resident completed both simulations or, if the resident completed only one, after he/she completed it).
- We calculated an “overall rating” by averaging the individual questions’ ratings.
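The per-survey processing above can be sketched as follows. This is a minimal illustration, not the study’s actual code; the function names, the use of None to represent “Not applicable,” and the example dates are all hypothetical.

```python
from datetime import date

def overall_rating(ratings):
    """Average the 10 question ratings, skipping "Not applicable"
    answers (represented here as None)."""
    answered = [r for r in ratings if r is not None]
    return sum(answered) / len(answered)

def code_period(survey_date, sim_completion_dates):
    """Code a survey's time period relative to the dates (sorted, one
    or two entries) on which the resident completed the simulation(s)."""
    if survey_date < sim_completion_dates[0]:
        return "preintervention"
    if survey_date >= sim_completion_dates[-1]:
        return "postintervention"
    return None  # between the two sessions: not classified in this sketch

# Example survey with 2 "Not applicable" answers and a survey collected
# after both simulation completion dates.
survey = [3.9, 4.0, None, 3.75, 4.0, 3.8, None, 4.0, 3.9, 3.85]
print(overall_rating(survey))
print(code_period(date(2015, 5, 1), [date(2014, 12, 10), date(2015, 2, 3)]))
```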
To compare pre- and postintervention overall ratings, we used the Hodges–Lehmann 2-sample aligned rank-sum test with period as the grouping variable, resident as the stratifying variable, and the median as the measure of location used for alignment.8 Because each resident had multiple surveys, we calculated each resident’s average overall rating in each period and report overall ratings as the median (interquartile range) of residents’ average overall ratings in each period.
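The logic of the aligned rank-sum test can be illustrated with a short sketch: within each stratum (resident), subtract the stratum median so between-resident differences drop out, then pool the aligned values and perform an ordinary rank-sum comparison across periods. The resident identifiers and ratings below are made up, and the normal approximation for the p-value is a coarser stand-in for the exact small-sample statistics of the cited Stata module (reference 8).

```python
from statistics import median, NormalDist

# Hypothetical per-survey overall ratings, grouped by resident and period.
surveys = {
    "resident_a": {"pre": [3.6, 3.8, 3.7], "post": [3.9, 3.95]},
    "resident_b": {"pre": [3.5, 3.6], "post": [3.8, 3.7, 3.9]},
    "resident_c": {"pre": [3.9, 3.85], "post": [3.92, 3.96]},
}

# Align within each stratum: subtract that resident's median rating.
aligned = []  # (aligned value, group); group 1 = postintervention
for groups in surveys.values():
    m = median(groups["pre"] + groups["post"])
    aligned += [(v - m, 0) for v in groups["pre"]]
    aligned += [(v - m, 1) for v in groups["post"]]

# Rank the pooled aligned values (midranks for ties).
order = sorted(range(len(aligned)), key=lambda k: aligned[k][0])
ranks = [0.0] * len(aligned)
i = 0
while i < len(order):
    j = i
    while j < len(order) and aligned[order[j]][0] == aligned[order[i]][0]:
        j += 1
    midrank = (i + j + 1) / 2  # average of the 1-based ranks i+1 .. j
    for k in range(i, j):
        ranks[order[k]] = midrank
    i = j

# Sum the postintervention ranks and apply a normal approximation.
w = sum(r for r, (_, g) in zip(ranks, aligned) if g == 1)
n = len(aligned)
n_post = sum(g for _, g in aligned)
mean_w = n_post * (n + 1) / 2
var_w = n_post * (n - n_post) * (n + 1) / 12
z = (w - mean_w) / var_w ** 0.5
p_value = 2 * (1 - NormalDist().cdf(abs(z)))
print(f"rank sum = {w}, z = {z:.2f}, p = {p_value:.3f}")
```

Because alignment removes each resident’s baseline, a resident who is rated generously overall does not inflate the postintervention group simply by contributing more surveys to it.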
To support our primary results and to investigate whether postgraduate year level was related to overall ratings, we performed a secondary generalized estimating equations analysis. For this analysis, we calculated the first quartile of the overall ratings of all surveys. We coded each survey as “1” or “0” where 1 = the overall rating was above the first quartile and 0 = the overall rating was equal to or below the first quartile. We then used generalized estimating equations to fit a model for the proportion of surveys with overall ratings above the first quartile, taking into account resident variability, with time period and postgraduate year level as the independent variables. We used a binomial distribution, identity link function, and exchangeable correlation structure.
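The dichotomization step feeding the generalized estimating equations model can be sketched as below. The quartile convention (linear interpolation) is an assumption, as are the example ratings; the actual model fit used Stata. In Python, the analogous model could be fit with statsmodels’ GEE using a binomial family, identity link, and exchangeable correlation structure.

```python
def first_quartile(values):
    """First quartile by linear interpolation (one of several
    conventions; the study's exact method is not specified here)."""
    s = sorted(values)
    pos = 0.25 * (len(s) - 1)
    lo = int(pos)
    frac = pos - lo
    hi = min(lo + 1, len(s) - 1)
    return s[lo] + frac * (s[hi] - s[lo])

# Hypothetical overall ratings across all surveys.
ratings = [3.5, 3.7, 3.8, 3.85, 3.9, 3.92, 3.95, 4.0]
q1 = first_quartile(ratings)

# Code each survey 1 if its overall rating is above the first quartile,
# 0 if it is equal to or below it.
coded = [1 if r > q1 else 0 for r in ratings]
print(f"Q1 = {q1:.3f}, coded = {coded}")
```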
Of the 2911 patients cared for by anesthesia residents during the study, 52% were excluded. Of the remaining patients, 1162 (83%) responded to the survey. The number of surveys per resident (mean ± SD) was 10.7 ± 11.06 in the preintervention period and 9.4 ± 6.31 in the postintervention period. We collected data on all 54 residents; 41 had surveys in both periods. Overall ratings differed significantly between periods (preintervention: 3.86 [3.76–3.94], postintervention: 3.91 [3.84–3.95]; P = .025). The biweekly averages of residents’ overall ratings are depicted in the Figure.
The first quartile of all the overall ratings was 3.812. Based on our generalized estimating equations analysis, there was a significant difference in the proportion of surveys with overall ratings above the first quartile across time period (preintervention: 419/579 [72.37%], postintervention: 397/509 [78.00%]; P = .03) but not across postgraduate year level (postgraduate year 2: 516/686 [75.22%], postgraduate year 3: 135/184 [73.37%], postgraduate year 4: 165/218 [75.69%]; P = .739).
Residents received higher ratings from patients after the curriculum. Although the changes are numerically small, any detectable improvement in patient satisfaction has value. Although experience may improve communication skills, our secondary analysis suggests that more experienced senior residents did not perform better than less experienced junior residents, and studies show that experience alone does not improve communication skills.9–11 Consistent with our results, attending physicians who attended a course on patient-centered communication achieved higher scores than a control group on an outpatient survey on communication skills.12 Another study on barriers to teaching communication skills suggested using educational interventions tailored to learners’ needs.13 With a customized curriculum, our study is a step in that direction.

Our study design was limited by its small sample size and the absence of a control group. Moreover, because the pre- and postintervention periods were completely separated in time, the observed association may reflect trends over time; a segmented regression, which compares the periods on the slope (ie, the change over time), would have been a stronger analysis, but our design did not permit one.14 Further research should consider randomized, multi-institutional studies to increase the sample size and assess the intervention’s effectiveness and the generalizability of the results. Future studies should also standardize the intervention (eg, have all residents attend the live sessions), investigate which parts of the intervention were most effective, explore other factors contributing to the detected changes, identify and adjust for confounding variables, and further validate the survey.
The authors would like to thank Ariel Mueller, MA, Ziyad Knio, BS, and Xinling (Claire) Xu, PhD, for their assistance in the data analysis; Amy Sullivan, EdD, Richard Schwartzstein, MD, and the Center for Education at Beth Israel Deaconess Medical Center for their support and guidance; the Carl J. Shapiro Institute for Education and Research at Beth Israel Deaconess Medical Center for their support of the pilot study; David Fobert, MLA, Michael McBride, RN, and Darren Tavernelli, RN, RRT, for their assistance in producing the simulation sessions; the Center for Anesthesia Research Excellence at Beth Israel Deaconess Medical Center for their support; and the Beth Israel Deaconess Medical Center Department of Anesthesia, Critical Care and Pain Medicine residents and leadership for their participation in and support of the study.
Name: John D. Mitchell, MD.
Contribution: This author helped conceptualize, design, and implement the study and prepare the manuscript.
Name: Cindy Ku, MD.
Contribution: This author helped conceptualize and design the study and review the manuscript.
Name: Brendan Lutz, BS.
Contribution: This author helped implement the study and prepare the manuscript.
Name: Sajid Shahul, MD, MPH.
Contribution: This author helped conceptualize and design the study, analyze the data, and review the manuscript.
Name: Vanessa Wong, BS.
Contribution: This author helped design and implement the study, analyze the data, and prepare the manuscript.
Name: Stephanie B. Jones, MD.
Contribution: This author helped design and implement the study and prepare the manuscript.
This manuscript was handled by: Edward C. Nemergut, MD.
1. Pichert JW, Miller CS, Hollo AH, Gauld-Jaeger J, Federspiel CF, Hickson GB. What health professionals can do to identify and resolve patient dissatisfaction. Jt Comm J Qual Improv. 1998;24:303–312.
3. Rhoton MF, Barnes A, Flashburg M, Ronai A, Springman S. Influence of anesthesiology residents’ noncognitive skills on the occurrence of critical incidents and the residents’ overall clinical performances. Acad Med. 1991;66:359–361.
4. Fletcher G, Flin R, McGeorge P, Glavin R, Maran N, Patey R. Anaesthetists’ Non-Technical Skills (ANTS): evaluation of a behavioural marker system. Br J Anaesth. 2003;90:580–588.
5. Flin R, Patey R, Glavin R, Maran N. Anaesthetists’ non-technical skills. Br J Anaesth. 2010;105:38–44.
6. Mitchell JD, Ku C, Wong V, et al. The impact of a resident communication skills curriculum on patients’ experiences of care. A A Case Rep. 2016;6:65–75.
7. Krupat E, Frankel R, Stein T, Irish J. The four habits coding scheme: validation of an instrument to assess clinicians’ communication behavior. Patient Educ Couns. 2006;62:38–45.
8. Linden A. ALIGNEDRANKS: Stata module to perform a two-sample aligned rank-sum (Hodges-Lehmann) test with exact statistics for small samples. Statistical Software Components; 2014. Available at: https://ideas.repec.org/c/boc/bocode/s457871.html. Accessed December 24, 2018.
9. Gauntlett R, Laws D. Communication skills in critical care. Contin Educ Anaesth Crit Care Pain. 2008;8:121–124.
10. Aspegren K, Lønberg-Madsen P. Which basic communication skills in medicine are learnt spontaneously and which need to be taught and trained? Med Teach. 2005;27:539–543.
11. Fallowfield L, Jenkins V, Farewell V, Saul J, Duffy A, Eves R. Efficacy of a Cancer Research UK communication skills training model for oncologists: a randomised controlled trial. Lancet. 2002;359:650–656.
12. Boissy A, Windover AK, Bokar D, et al. Communication skills training for physicians improves patient satisfaction. J Gen Intern Med. 2016;31:755–761.
13. Junod Perron N, Sommer J, Louis-Simonet M, Nendaz M. Teaching communication skills: beyond wishful thinking. Swiss Med Wkly. 2015;145:w14064.
14. Wagner AK, Soumerai SB, Zhang F, Ross-Degnan D. Segmented regression analysis of interrupted time series studies in medication use research. J Clin Pharm Ther. 2002;27:299–309.