Teaching and assessing medical students' communication skills is a crucial aspect of their curriculum. The Liaison Committee on Medical Education states that medical schools must provide students “specific instruction in communication skills as they relate to physician responsibilities, including communication with patients, families, colleagues and other health professionals.”1 Traditionally, communication skills are taught and evaluated by a variety of methods, including formal training sessions, modeling, and interacting with both actual and simulated patients.2 However, electronic mail (e-mail) communication with patients is becoming more common.3 A recent survey showed that 74% of adults would like to be able to communicate with their doctors via e-mail.4 Several studies have recently examined both physician and patient use5–10 and satisfaction with e-mail communication.5–7,9,11–14 Kleiner et al5 specifically looked at parent and physician attitudes toward e-mail in both general pediatric and subspecialty pediatric practices. They found that a majority of parents desired the availability of e-mail communication with their doctor's office.
While the ethical and medical benefits and costs of this trend are worthy of discussion,15 today's medical students will likely correspond electronically with patients throughout their careers. It is foreseeable that patient demand will increase physicians' use of e-mail as a means of communication. Because this mode of communication lacks face-to-face interaction, tone can easily be misinterpreted. In addition, there are concerns related to confidentiality. Despite these potentially contentious issues, this form of communication is increasing in clinical practice. However, formal teaching about e-mail communication, and use of e-mail as a teaching tool, is not part of most medical school or residency curricula, though this may be changing.16–18
We felt that the increase in physician e-mail communication with patients in clinical practice made it ripe for exploration as a tool to evaluate medical students' knowledge, professionalism, and communication skills. Using e-mail as a teaching and assessment tool is a novel concept in clinical medical education. We therefore created simulated patient e-mails that required a medical student response. We wanted to determine whether it was feasible to teach medical students e-mail communication skills in an interactive, faculty-guided discussion session and to use their responses to assess their competence in conveying medical knowledge professionally.
This study was granted exemption status by the institutional review board at the University of Michigan. Four simulated e-mail cases were developed by members of the pediatric faculty who are actively involved in medical student education, and a secure Web site was created to house them. In each of the four cases, the simulated parent e-mails his or her physician because of confusion about the medical care the child received during a recent interaction with the physician. The chief complaint in each e-mail differs, but each represents a diagnosis or dilemma commonly encountered by medical students over the course of the pediatric clerkship. Table 1 provides a summary of the e-mail scenarios. Grading rubrics used to formally assess student responses were also created by the faculty who developed the simulated e-mails. Standards for the rubric were agreed on by professional consensus (Appendix 1). To reach consensus, e-mail cases were piloted with students and then graded with the rubric; faculty then discussed, as a group, differences in interpretation of the grading rubric. This process was repeated several times before the rubric was finalized. The rubric rates students' ability to demonstrate medical knowledge by how well their e-mail response to the parent accomplishes the following tasks: (1) explain the standard of care for the scenario, (2) identify the risks and benefits of various treatment decisions, and (3) describe appropriate next steps. The rubric also assesses professionalism and communication skills such as showing respect for the parent, validating parental concerns, and avoiding medical jargon.
Implementation of the assessment tool
The pediatric clerkship is an eight-week course required during the third year of medical school. Our medical student academic year begins in May and concludes in April of the following year. We initiated this study in May 2008 and concluded it in February 2009. Because of curricular requirements in the outpatient portion of the rotation, we felt it best that students complete this exercise during the inpatient portion. Therefore, during the first week of a four-week inpatient rotation, students were asked to reply to one of the e-mail scenarios. All students received the same randomly assigned e-mail scenario and had four days to respond to it. After the response deadline, students attended a group teaching session that reviewed principles important to e-mail communication. At the end of the session, students used the grading rubric to informally evaluate their own e-mail correspondence in light of the discussion in which they had just participated, and the session concluded with a group discussion of how their self-evaluated responses fared against the teaching they had just received. No individualized feedback occurred. During the second week of the inpatient rotation, students had the opportunity, though were not required, to respond to a second e-mail scenario and receive informal faculty feedback on their response. During the third week, students were presented with a final simulated e-mail scenario, different from the first week's case. This e-mail was graded by a pediatric faculty member and was a component of the students' final grade for the clerkship.
Eight faculty members participated in rating the first- and third-week e-mail responses of the 96 students who rotated through pediatrics from May 2008 to February 2009. Seven were pediatric hospitalists, and one was the lead member of this study (J.C.). Every student e-mail response was rated by three to five faculty members.
Ratings were performed on the 9-item grading rubric, with each item rated on a 3-point scale quantified as 0, 1, or 2 points. Items were summed to a total score (all 9 items; possible range 0–18), a medical knowledge score (4 items; possible range 0–8), a communication score (3 items; possible range 0–6), and a professionalism score (2 items; possible range 0–4).
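The subscore arithmetic can be sketched as follows. This is a minimal illustration in which the ordering of rubric items is our assumption; the source specifies only the counts (4 knowledge, 3 communication, and 2 professionalism items):

```python
def score_response(items):
    """Score one rater's 9-item rubric rating (each item 0, 1, or 2 points).

    Assumed item order (illustrative only): items 0-3 = medical knowledge,
    items 4-6 = communication, items 7-8 = professionalism.
    """
    assert len(items) == 9 and all(i in (0, 1, 2) for i in items)
    return {
        "total": sum(items),                 # possible range 0-18
        "knowledge": sum(items[0:4]),        # possible range 0-8
        "communication": sum(items[4:7]),    # possible range 0-6
        "professionalism": sum(items[7:9]),  # possible range 0-4
    }
```

For example, a response rated 2 on every item scores 18 total, with subscores of 8, 6, and 4.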
Of the eight raters who rated responses, three rated all students. Therefore, these three raters were used to estimate the interrater reliability of the instrument.
Scores for each response were computed by averaging total and subscores across all faculty raters for each response. To test for possible confounds due to accumulating student clinical experience, baseline (first-response) scores were modeled by date using linear regression. Paired t tests compared students' score changes from their first to final responses for total score, knowledge subscore, communication subscore, and professionalism subscore. The family-wise type I error rate was set at α = .05, so a Bonferroni-adjusted per-test α = .0125 was used for these four tests.
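The paired comparison just described can be sketched in code. This is a rough, self-contained illustration with fabricated scores (the study's actual analysis was run in R); the `paired_t` helper, the sample values, and the eight-student sample size are all our own:

```python
import math

def paired_t(first, final):
    """Paired t statistic on per-student changes (final - first); df = n - 1."""
    diffs = [b - a for a, b in zip(first, final)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)  # sample variance
    return mean_d / math.sqrt(var_d / n)

# Fabricated total scores (0-18 scale) for eight students, first vs. final.
first = [14.0, 15.5, 13.0, 14.5, 16.0, 15.0, 14.0, 13.5]
final = [16.1, 17.4, 15.2, 16.3, 17.9, 17.1, 16.2, 15.6]

t = paired_t(first, final)

# Four planned comparisons at a family-wise alpha of .05 give a
# Bonferroni-adjusted per-test alpha of .05 / 4 = .0125.
alpha_per_test = 0.05 / 4
```

In practice one would compare |t| against the critical value at the Bonferroni-adjusted α for the appropriate degrees of freedom, or use `scipy.stats.ttest_rel` to obtain a P value directly.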
All analysis was performed using R version 2.9.1 (R: A Language and Environment for Statistical Computing, Vienna, Austria); reliability statistics were computed using the multilevel package version 2.3.
Of the 96 students in the study, 7 either did not complete a third-week response or their response was unavailable electronically; these were excluded from analysis. The remaining 89 students' first-week e-mail responses were each rated by 3 or 4 raters (mode 3, mean 3.02), and their final e-mail responses by 3 or 4 raters (mode 4, mean 3.83).
Our results indicate that the subscores were fairly independent: correlations of subscores across all raters and responses were small, with r = 0.16 between knowledge and communication, r = 0.2 between knowledge and professionalism, and r = 0.3 between communication and professionalism.
Intraclass correlations across all ratings of each response were used to estimate the interrater reliability of the total score (0.85), knowledge subscore (0.89), professionalism subscore (0.79), and communication subscore (0.66). Each subscore was averaged across raters for each student response for the performance improvement analyses.
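Interrater reliability of this kind is commonly estimated with a one-way intraclass correlation such as ICC(1), which is what R's multilevel package (cited in the analysis section) computes. The following Python sketch of ICC(1) is our own illustration, and the exact ICC variant the authors used is an assumption:

```python
def icc1(ratings):
    """One-way intraclass correlation ICC(1).

    ratings: one row per student response, each row holding that response's
    scores from k raters (assumed constant k here for simplicity).
    """
    n = len(ratings)      # number of responses (targets)
    k = len(ratings[0])   # raters per response
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    # Between-target and within-target mean squares from one-way ANOVA.
    ms_between = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    ms_within = sum(
        (x - m) ** 2 for row, m in zip(ratings, row_means) for x in row
    ) / (n * (k - 1))
    return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)
```

Perfect agreement among raters, with scores varying across responses, yields an ICC of 1.0; disagreement within responses pulls the estimate toward zero.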
Students' baseline performance did not improve as the academic year progressed: the linear effect of date on first-response total score was near zero (a change of less than 0.002 score points per month) and nonsignificant (F(1,88) < 0.001, NS).
Nineteen students (19.7%) completed the optional second e-mail response. A two-way mixed-model ANOVA comparing performance by time (first versus third response, within-subject) and by completion of the second e-mail response (completed or not, between-subject) found neither a main effect of completion nor an interaction. Thus, students who completed the optional response did not perform better, overall or on any subscale, than those who did not.
Scores on the total scale and all subscales improved significantly from the first to the final testing. Students' total scores improved an average of 2.42 points, from mean = 14.36 (s = 1.92) to mean = 16.78 (s = 0.87) (t(88) = 11.16, P < .0001). Knowledge scores improved an average of 1.49 points, from mean = 5.75 (s = 1.65) to mean = 7.24 (s = 0.75) (t(88) = 7.71, P < .0001). Communication scores improved an average of 0.42 points, from mean = 5.23 (s = 0.75) to mean = 5.65 (s = 0.41) (t(88) = 4.74, P < .0001). Professionalism scores improved an average of 0.51 points, from mean = 3.38 (s = 0.55) to mean = 3.89 (s = 0.26) (t(88) = 8, P < .0001).
Our teaching intervention regarding e-mail communication was effective in improving e-mail communication skills across all three categories: medical knowledge, communication, and professionalism. This effect was sustained as the pediatric clerkship progressed over the academic year. Because we observed no change in baseline performance across the academic year, we infer that students do not naturally acquire these communication skills through routine clerkship experiences; the sustained improvement therefore implies that a targeted intervention can advance students' skills in this particular means of communication. Interestingly, although students were offered the chance to complete an optional e-mail response purely for formative feedback during their second week, approximately 80% chose not to. Discussion with faculty revealed that students who chose to complete a second case were usually those who felt they personally needed improvement after the initial teaching session.
We recognize that in real-world clinical practice, many physicians may not be comfortable communicating via e-mail with parents who are upset. This issue was addressed with the students in the faculty-guided discussion session. Our goal was to assess components of medical knowledge, communication, and professionalism in a unique, modern way; we feel this was accomplished without detriment to students' future use of e-mail with patients. In fact, both the literature6 and our own students' comments regarding the appropriateness of e-mail responses to distressed parents helped shape the content of the teaching intervention students receive on our clerkship. Other scenarios, with varying levels of urgency, could be designed to further elucidate core medical knowledge and communication skills. Such a design could be incorporated into an objective structured clinical examination format, broadening this exercise's use as an assessment tool.
Limitations of our study include that it was conducted at a single institution and lacked a control group. The faculty designed the grading rubric based on their own professional consensus. In addition, while interrater reliability of the grading rubric showed good agreement overall, the largest discrepancy was in communication, indicating some disagreement among faculty about what constitutes good communication. These standards may be the most subjective among practitioners as well, and further work is needed to determine whether more objective measures could be instituted. Convening faculty focus groups and asking actual parents to suggest additional rating parameters meaningful to them could enrich this exercise. Patients have been found to hold different perspectives on the professional roles of physicians.19 Therefore, additional studies are needed that examine patients' interpretations of e-mail communications, as well as whether this curriculum enhances this particular skill in the long run. Finally, while this study focused on the feasibility, scalability, and reliability of this assessment technique and found strong evidence for all three, we are currently developing studies to further validate the technique by comparing student performance with their communication skills in other contexts and by comparing physicians' ratings of student performance with parents' and patients' ratings. Once the validity of this assessment is established, it may prove a valuable research tool for investigating interstudent differences due to gender, socioeconomic background, and choice of future specialization, among other factors.
As electronic communication with patients increases, teaching this unique skill should be considered an essential component of the medical school communication curriculum, including elements such as reading literacy, appropriately conveying medical knowledge, medical-legal issues, and professionalism.20,21 Simulated e-mails are feasible to institute in a clinical clerkship setting because of their relatively low cost and technological simplicity. Moreover, a brief educational intervention was effective in improving student performance on e-mail communication in a simulated environment, and student responses to simulated e-mails could be used as part of formal student assessment. As our students become physicians in an increasingly technology-driven world, these skills will only grow in importance.
The authors would like to thank Patricia Mullan, PhD, professor of medical education, for her kind and helpful review of this manuscript.
This study was deemed exempt by the University of Michigan Institutional Review Board.
Dr. Christner presented a brief summary of this work at the AMEE Ottawa Conference, Miami, Florida, May 2010.
2. Aspegren K. BEME Guide No. 2: Teaching and learning communication skills in medicine—A review with quality grading of articles. Med Teach. 1999;21:563–570.
5. Kleiner KD, Akers R, Burke BL, Werner EJ. Parent and physician attitudes regarding electronic communication in pediatric practices. Pediatrics. 2002;109:740–744.
6. Gaster B, Knight CL, DeWitt DE, Sheffield JV, Assefi NP, Buchwald D. Physicians' use of and attitudes toward electronic mail for patient communication. J Gen Intern Med. 2003;18:385–389.
7. Moyer CH, Stern DT, Dobias KS, Cox DT, Katz SJ. Bridging the electronic divide: Patient and provider perspectives on e-mail communication in primary care. Am J Manag Care. 2002;8:427–433.
8. Brooks RG, Menachemi N. Physicians' use of email with patients: Factors influencing electronic communication and adherence to best practices. J Med Internet Res. 2006;8:e2.
9. Rosen P, Kwoh CK. Patient-physician e-mail: An opportunity to transform pediatric health care delivery. Pediatrics. 2007;120:701–706.
10. Houston TK, Sands DZ, Nash BR, Ford DE. Experiences of physicians who frequently use e-mail with patients. Health Commun. 2003;14:515–525.
11. Katz SJ, Moyer CA, Cox DT, Stern DT. Effect of a triage-based e-mail system on clinic resource use and patient and physician satisfaction in primary care. J Gen Intern Med. 2003;18:736–744.
12. Stalberg P, Yeh M, Ketteridge G, Delbridge H, Delbridge L. E-mail access and improved communication between patient and surgeon. Arch Surg. 2008;143:164–168.
13. Hobbs J, Wald J, Jagannath YS, et al. Opportunities to enhance patient and physician e-mail contact. Int J Med Inform. 2003;70:1–9.
14. Houston TK, Sands DZ, Jenckes MW, Ford DE. Experiences of patients who were early adopters of electronic communication with their physician: Satisfaction, benefits and concerns. Am J Manag Care. 2004;10:601–608.
15. Freed DH. Patient-physician e-mail: Passion or fashion? Health Care Manag (Frederick). 2003;22:265–274.
16. Paladine HL, Miller K, White B, Feifer C. Teaching physician-patient e-mail communication skills in a residency program. Fam Med. 2008;40:160–161.
17. Dyrbye LN. Reflective teaching: The value of e-mail student journaling. Med Educ. 2005;39:524–525.
18. Khraisat A, Shanaah A, Berland D, Cannady PB. Morning report emails: A unique model to improve the current format of an internal medicine training tradition. Med Teach. 2007;29:413.
19. Boudreau JD, Jagosh J, Slee R, Macdonald ME, Steinert Y. Patients' perspectives on physicians' roles: Implications for curricular reform. Acad Med. 2008;83:744–753.
20. Kane B, Sands DZ. Guidelines for the clinical use of electronic mail with patients. J Am Med Inform Assoc. 1998;5:104–111.