With the increasing use of electronic medical records (EMRs) and computer-based history taking in diagnostic and therapeutic settings, physicians’ empathetic engagement will need to adjust to the inherent physician–patient communication challenges—a concept described by Lodyga et al1 as “digitally adapted empathy.” The Health Information Technology for Economic and Clinical Health (HITECH) Act, enacted as a component of the American Recovery and Reinvestment Act of 2009,2 included billions of dollars for the development and implementation of EMRs.3 Furthermore, the Institute of Medicine has argued strongly for the increased use of EMRs to improve delivery of preventive services, adherence to evidence-based guidelines, scrutiny of drug therapy, and coordination of care, and to reduce medical errors.4–7 In light of the support and growth of EMR use in medical settings, many popular-media articles8–10 and research papers4,7,11–13 have postulated and examined the effects on physician–patient communication of introducing the EMR as a “third party”12 in the exam room. Several studies have suggested strategies to more effectively incorporate EMRs into clinical practice without compromising the physician–patient relationship, including sharing computer screens with patients,14 designing EMRs to contain patients’ personal characteristics and narratives to aid physicians in acknowledging more than just the patient’s biological well-being,15 and ensuring efficient designs for office space.14
Similar to experienced clinicians, physicians-in-training grapple with the complexities of using EMRs while practicing their clinical skills. Some researchers12,14,16,17 have examined the effects of EMRs on the medical student clinical experience. Chumley et al,16 for example, found better documentation of pain characteristics by first-year medical students using an EMR. Other benefits of EMRs in medical student training are suggested by studies showing that students ask more history questions because of EMR prompts.12,14,16 However, in a survey of third-year medical students, Rouf et al6 found that 48% of respondents reported spending less time looking at patients, and 34% reported spending less time talking to patients because of EMRs.
Medical students often learn interviewing and history-recording skills through writing notes rather than using computer-based technology.12 The literature suggests, however, that EMR-specific skills are not acquired spontaneously but, rather, through training and experience.5,12,13 The Alliance for Clinical Education has suggested that medical student competencies regarding EMR documentation should be assessed prior to graduation.18 Yet although EMR-specific communication skills curricula are beginning to be implemented by U.S. medical schools, few studies to date have examined the effects of these curricula on medical student education.5,15,19
For many years, third-year students at Sidney Kimmel Medical College at Thomas Jefferson University consistently avoided using the EMR when they were videotaped interacting with a patient as part of a required assessment during the pediatric clerkship. Anecdotal reports suggested that the reason was concern that their interpersonal skills while communicating with the patient to obtain a history would be rated lower. Given the aforementioned reports that use of EMRs may adversely affect physician–patient communication and physicians’ empathic engagement in patient care, we designed this study to examine whether a specific intervention to teach proper use of the EMR in a clinical setting could help improve students’ empathic engagement. We hypothesized that the training would reduce the communication hurdles in clinical encounters.
This study was reviewed and approved by the Thomas Jefferson University institutional review board.
Participants included 70 third-year students at Sidney Kimmel Medical College. During the 2012–2013 academic year, third-year students were eligible to participate in the study while on their regularly scheduled six-week pediatric clerkship if their outpatient assignment was at a site using the Epic EMR system. Prior to the study, all participants had completed the medical school’s Introduction to Clinical Medicine course, in which a case-based standardized patient (SP) program focuses on obtaining histories and improving communication skills in the first and second years.
Compared with other studies examining outcomes related to students’ use of EMRs,5,16,17 our sample size was fairly large. Although a smaller number of participants could have provided meaningful data, the large class size enabled us to recruit as many consenting students as the clerkship budget for SPs would accommodate.
To measure students’ empathy, we used the Jefferson Scale of Empathy (JSE) and the Jefferson Scale of Patient Perceptions of Physician Empathy (JSPPPE). We used separate rating scales to measure students’ communication skills and history-taking skills, as described below.
Developed via literature reviews and pilot studies in response to a gap in instruments available to measure empathy specific to medical education, the JSE serves as a student self-report of attitudes and orientation toward empathic engagement in patient care, based on a definition of empathy as a primarily cognitive attribute.20,21 The scale consists of 20 items answered on a seven-point Likert-type scale (1 = strongly disagree to 7 = strongly agree), with a higher score indicating a more empathetic attitude toward patient care (possible range of scores: 20–140). Sample items are “It is difficult for a physician to view things from patients’ perspectives” and “Patients feel better when their physicians understand their feelings.” In a series of studies, the JSE has demonstrated construct validity,22,23 criterion-related validity,23,24 predictive validity,25 internal consistency reliability,22,23 and test–retest reliability.22
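As a concrete illustration of this scoring scheme, the minimal Python sketch below (not part of the published scale materials) totals 20 Likert responses to yield a score in the 20–140 range. The `reverse_items` argument reflects the common practice of reverse-scoring negatively worded items; which items are reverse-keyed is an assumption for illustration, not the published scoring key.

```python
def score_jse(responses, reverse_items=frozenset()):
    """Total a 20-item JSE administration.

    responses: 20 integers, each on the 1-7 Likert scale.
    reverse_items: 0-based indices of negatively worded items,
    scored in reverse (1 <-> 7). The choice of indices here is
    illustrative, not the published scoring key.
    """
    if len(responses) != 20 or any(not 1 <= r <= 7 for r in responses):
        raise ValueError("expected 20 responses on a 1-7 scale")
    return sum(8 - r if i in reverse_items else r
               for i, r in enumerate(responses))

# A uniformly neutral respondent scores the midpoint of the scale:
print(score_jse([4] * 20))  # 80
# The extremes of the possible 20-140 range:
print(score_jse([1] * 20), score_jse([7] * 20))  # 20 140
```

A higher total corresponds to a more empathetic attitude toward patient care, as described above.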
The JSPPPE was designed to assess a patient’s or an SP’s perception of a physician’s or medical student’s empathy.26,27 The original scale23 consists of five items. We used the six-item version of the scale, which was previously used by Berg et al20 in their study of SP assessment of medical student empathy. In our study, items were answered on a five-point Likert-type scale (poor = 1, fair = 2, good = 3, very good = 4, excellent = 5), with a higher score indicating more empathic engagement (possible range of scores: 1–5). An example item is, “Did the student seem concerned about me and my family?” Psychometric evidence supporting the reliability and validity of the scale has been shown in studies of internal medicine residents26 and family medicine residents.27
Communication skills and history-taking skills rating scales.
The communication skills scale consisted of 10 items related to communication and interpersonal skills answered on a five-point Likert-type scale (poor = 1, fair = 2, good = 3, very good = 4, excellent = 5). A sample item is “Clarity of Questions: Asked clear questions, one question at a time. Avoided leading questions.” The history-taking skills scale consisted of three items related to the SALTED technique for EMR use (described below), answered on a similar five-point Likert-type scale. An example item is, “How well did the student set up the room with the computer–patient–physician triangle in mind?”
Procedures and intervention
All recruited students participated in an informed consent interview and had the opportunity to be excused from the study if they did not wish to move forward with the consent process. Students were not compensated for their participation. Each student was assigned a unique study number, which was kept separate from identifying information to protect students’ identities during the study and to ensure that participation would have no effect on their pediatric clerkship grade.
Participants in each six-week clerkship block were randomly assigned to the intervention group (n = 38) or to the control group (n = 32). All control and intervention students participated in the regular clinic training on EMR use as scheduled for the clerkship. The intervention group underwent an additional one-hour training session on EMR-specific communication skills on their first day of orientation. That intervention session began with a discussion of the positives and negatives of EMR use, illustrated through three video clips in which the EMR is used poorly, moderately, and well. Then, the SALTED mnemonic—Set-up, Ask, Listen, Type, Exceptions, and Documentation—was introduced, and the technique it teaches for proper use of EMRs in clinical settings (see Box 1 for details) was discussed. Students then practiced applying the SALTED technique in four brief role-playing scenarios.
The SALTED Mnemonic and Technique for Proper Use of Electronic Medical Records in Clinical Settings
Talking to a screen can make your patient encounter very bland and dissatisfying. Here are some tips to enhance communication.
S Set up
- “I’m going to sign into Jasmine’s chart, then we can talk more about what’s going on.”
- “I’m looking at Dominic’s growth chart. Would you like to take a look?”
- Assess for mechanical difficulties, then move on
- Remove barriers and set up best patient/parent–doctor–computer triangle possible: adjust screen position, move chairs and stools, direct patients to switch seats
A Ask
- Model the timing and behavior of old-fashioned note writing
L Listen
- Never type while you listen. Patients don’t feel heard.
- Pause to interact/react
- Clarify what the parent/patient has told you: “Let me check if I have this right…”
T Type
- Try to avoid prolonged periods of typing
- Guide the patient: “You’ve told me a lot. Now I will need a minute to type in the details.”
E Exceptions
- There are special circumstances that might require stepping out from behind the screen or keyboard
- Consider full focus on the patient/parent if bad news is divulged, you uncover a positive screen for abuse, or your patient is at risk (injured, in danger, in marked respiratory distress)
D Documentation
- Real-time documentation is helpful
- Involve the patient/parent when typing out the plan
- Avoiding jargon may allow for the plan to double as patient instructions
- You can check for accuracy before closing your note
During the final week of their pediatric clerkship, all participants completed a video-recorded 15-minute encounter with an SP who was portraying a 17-year-old female with asthma. Students were required to document their findings on the EMR during this encounter.
All participating students completed the JSE twice: once at the beginning of the six-week block before the regular clinic EMR training (and the intervention session) took place, and again at the end of the block after the SP encounter. Following each encounter with a student, the SP completed the JSPPPE as well as the communication skills rating scale and the history-taking skills scale (with the three items specific to the SALTED technique). The two SPs participating in the study were trained to complete checklists and rating scales, including the JSPPPE.
Two faculty members (M.S., J.W.C.), who were trained regarding the rating scales, watched all videotaped encounters and completed the same scales as the SPs. The SP and faculty raters were blinded to whether students were in the intervention or control group. Interrater agreement was established; kappa values were not generated for the JSE or JSPPPE because this was not a validation study, but all raters received the same training.
Differences in mean scores between the intervention and control groups were tested by using independent t tests. Paired t tests were used to test the pretest–posttest difference within groups. Differences in categorical variables were evaluated using chi-square. SAS version 9.3 for Windows (SAS Institute, Cary, North Carolina) was used for statistical analyses.
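The analyses just described can be sketched with synthetic data. The sketch below uses Python’s scipy rather than SAS, and every number in it is invented for illustration; it is meant only to show the shape of the three tests, not to reproduce the study’s results.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Synthetic stand-ins for JSE-like scores (the study itself used SAS 9.3).
intervention_pre = rng.normal(113, 8, 38)
intervention_post = intervention_pre + rng.normal(1, 5, 38)
control_post = rng.normal(111, 16, 32)

# Between-group difference at posttest: independent-samples t test.
t_ind, p_ind = stats.ttest_ind(intervention_post, control_post)

# Within-group pretest-posttest change: paired t test.
t_pair, p_pair = stats.ttest_rel(intervention_pre, intervention_post)

# Categorical variable (e.g., gender) by group: chi-square test
# on a hypothetical 2 x 2 contingency table of counts.
table = np.array([[20, 18], [15, 17]])
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

print(f"independent t: p = {p_ind:.3f}; paired t: p = {p_pair:.3f}; "
      f"chi-square: p = {p_chi:.3f} (dof = {dof})")
```

The same three calls cover all comparisons reported below: between-group posttest differences, within-group change, and group balance on categorical variables.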
Student scores on the JSE
No statistically significant difference was observed between the pretest JSE mean scores of students in the intervention group (mean = 113.08, standard deviation [SD] = 8.36) and students in the control group (mean = 113.12, SD = 11.90) (P = .98). Also, no significant differences were found between the intervention and control groups on gender, Medical College Admission Test (MCAT) scores, and United States Medical Licensing Examination Step 1 scores.
At the end of the clerkship, the JSE mean score for the intervention group increased slightly to 113.9 (SD = 11.0), whereas the mean score for the control group decreased to 111.5 (SD = 16.4). However, the change in JSE score was not statistically significant for either the intervention group (P = .57) or the control group (P = .41). Also, as shown in Table 1, the posttest JSE mean score for the intervention group was higher than that for the control group, but the difference was not statistically significant (P = .47); the pattern of findings was, however, in the expected direction.
Faculty ratings of students’ empathic engagement and skills
The faculty mean ratings on the three assessment measures (JSPPPE, history-taking skills, and communication skills) were significantly higher for students in the intervention group than for those in the control group (see Table 1). For the JSPPPE, the effect size was 0.83. The effect size estimates were 0.86 for history-taking skills and 0.60 for communication skills. Effect size estimates of these magnitudes are not trivial; they indicate that the obtained differences between the two groups in ratings by faculty were of practical (clinical) importance.28
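The effect sizes reported here are standardized mean differences. As a minimal sketch, assuming the common pooled-standard-deviation form (Cohen’s d; the paper cites Hojat and Xu28 for interpretation but does not spell out the formula), such an estimate can be computed as follows. The input numbers are hypothetical, not the study’s data.

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference between two groups, using the
    pooled standard deviation in the denominator (Cohen's d)."""
    pooled_var = ((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2)
    return (mean1 - mean2) / math.sqrt(pooled_var)

# Hypothetical five-point-scale ratings for groups of 38 and 32:
# a 0.4-point gap relative to the pooled spread is a large effect.
print(round(cohens_d(3.9, 0.5, 38, 3.5, 0.45, 32), 2))  # 0.84
```

By the conventional benchmarks, values near 0.8 (such as those obtained for the JSPPPE and history-taking skills) are large effects, and values near 0.6 are moderate-to-large.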
SP ratings of students’ empathic engagement and skills
The SP mean ratings on history-taking skills were significantly higher for students in the intervention group than for students in the control group (P = .05; Table 1). Although the SP mean rating on the JSPPPE was also higher for the intervention group, the difference (P = .07) did not reach the conventional level of statistical significance. SP mean ratings on communication skills were nearly identical for the intervention and control groups.
Comparison of faculty and SP ratings
The faculty mean ratings for students in both the intervention and control groups on the JSPPPE, history-taking skills scale, and communication skills scale were significantly higher (P < .001) in comparison with the SP mean ratings.
Our study supports and builds on the findings of Morrow et al,5 who examined the implementation of EMR-specific communication skills training among first-year medical students during their history-taking and communication skills curriculum. Their study demonstrated that EMR-specific skills are not acquired spontaneously: Students who received training in EMR communication performed better in EMR-specific communication skills than did a control group as rated by an SP, but there were no differences in general communication skills between the intervention and control groups.5 Our study took place during a core clerkship in the third year of medical school. While the final test of our intervention on proper use of the EMR during patient care used SPs for ease of rating and comparison, the ongoing application of the skills the intervention group learned during the one-hour training session occurred with actual patients in the hospital and clinic setting throughout the six-week clerkship.
It is interesting to note that our short and simple training session contributed to a trend toward higher self-reported empathy, as measured by the JSE. This trend, however, was not statistically significant, perhaps because a six-week clerkship is too short a period in which to produce a pronounced improvement in self-reported empathy. Faculty, however, rated students’ empathic engagement in patient care, history-taking skills, and communication skills as significantly better in the intervention group than in the control group. Similarly, SPs rated the history-taking skills of students in the intervention group, who learned the SALTED mnemonic and technique, as significantly better than those of their counterparts in the control group. Such a pattern of significant findings was not observed in the SPs’ ratings of students’ empathy and communication skills. These findings are consistent with reports that SPs can more reliably assess well-defined technical skills, such as history taking and physical examination, than social skills, such as empathy and communication.29,30 However, because the intervention training session included role-plays, it is possible that the role-plays alone bolstered communication and empathy skills and may account for differences between groups.
Several factors plausibly explain why we expected positive outcomes of the intervention for empathic engagement in patient care. Discussion of the video clips of EMR encounters, in addition to the SALTED training, may have heightened students’ awareness of negative and positive aspects of the EMR and thus contributed to more positive encounters. Also, role-playing has been discussed as one of the approaches to improving empathy in patient care.31 Thus, the four brief role-playing practice scenarios in which the intervention group participated were expected to improve empathic engagement.32 In addition, components of the SALTED technique in history taking—such as listening then typing, and clarifying and validating patients’ concerns—are among the building blocks of empathic engagement in patient care.33
Further research regarding the implementation of EMR-specific training in medical school and its effect on empathetic physician–patient communication is needed. It would be interesting to further evaluate whether the documented progress notes written by students with higher scores on communication skills and empathy contained comprehensive and thorough information on the patient’s chief complaint and medical history, or whether superior performance in these areas was coupled with less effective documentation in the EMR. A study by Yudkowsky et al19 found that while fourth-year students who had used EMRs throughout their third year of medical school could maintain effective patient-centered communication, they failed to retrieve and to question the SP on important information embedded in the EMR.
Limitations and strengths
In consenting to the study, students in both groups were made aware that the study examined how the training might improve empathy, which could have introduced some bias. This study involved a single institution, which limits the generalizability of our results. Pretest–posttest comparisons on all assessment measures (JSPPPE, history-taking skills, communication skills) in the intervention and control groups could have strengthened the study. Also, examining the effects of the training intervention after six weeks does not explore the long-term sustainability of the skills.
Among the strengths of the study are the random assignment of students into the intervention and control groups and the lack of significant differences between the two groups on gender composition, age, previous academic performance (MCAT and USMLE Step 1 scores), and self-reported empathy prior to the experiment (i.e., on the pretest JSE). Thus, obtained differences between the intervention and control groups cannot be attributed to the confounding effects of these variables.
Our study findings generally suggest that a simple intervention providing specialized training regarding EMR-specific communication skills can improve third-year medical students’ empathic engagement in patient care, history-taking skills, and communication skills as rated by faculty. Also, SP ratings suggested an improvement in history-taking skills as a result of the EMR-specific training.
Our findings emphasize the importance of including EMR-specific communication skills training in the medical school curriculum. The overall findings—that faculty mean ratings on empathic engagement, communication skills, and history-taking skills were significantly higher for students in the intervention group than those in the control group, and that SP mean ratings on history-taking skills were higher for the intervention group—suggest that the EMR-related hurdles in communication, history taking, and empathic engagement in a clinical setting can be addressed by proper use of the EMR, confirming our research hypothesis. Further multi-institutional research is needed for generalization of our findings.
Acknowledgments: The authors would like to thank Lauren Daly and Debra Eckhardt of Nemours Children’s Clinic for their research coordination efforts for this study.
1. Lodyga M, Fredericks M, Ross M, Kondellas B. EMR: Call for empathy in the patient–clinician relationship within a technological milieu: Implications for professional nursing practice. Electron J Health Inform. 2011;6(3):e23.
2. U.S. Department of Health and Human Services. HITECH Act enforcement interim final rule. http://www.hhs.gov/ocr/privacy/hipaa/administrative/enforcementrule/hitechenforcementifr.html. Accessed August 16, 2016.
3. Peled JU, Sagher O, Morrow JB, Dobbie AE. Do electronic health records help or hinder medical education? PLoS Med. 2009;6:e1000069.
4. Rouf E, Whittle J, Lu N, Schwartz MD. Computers in the exam room: Differences in physician–patient interaction may be due to physician experience. J Gen Intern Med. 2007;22:43–48.
5. Morrow JB, Dobbie AE, Jenkins C, Long R, Mihalic A, Wagner J. First-year medical students can demonstrate EHR-specific communication skills: A control-group study. Fam Med. 2009;41:28–33.
6. Rouf E, Chumley HS, Dobbie AE. Electronic health records in outpatient clinics: Perspectives of third year medical students. BMC Med Educ. 2008;8:13.
7. Ventres WB, Frankel RM. Patient-centered care and electronic health records: It’s still about the relationship. Fam Med. 2010;42:364–366.
8. Walker EP. Do computers in exam rooms hinder communication? MedPage Today. June 22, 2012. http://www.medpagetoday.com/InOtherWords/33428. Accessed August 16, 2016.
9. Neergaard L. Doctors learn how to keep the human touch in technology. Huffpost Healthy Living. March 2012. http://www.huffingtonpost.com/2012/03/29/human-touch-technology-doctors_n_1387595.html. Accessed May 22, 2014. [No longer available.]
10. What bugs Americans most about their doctors. Consum Rep. June 2013. http://www.consumerreports.org/cro/magazine/2013/06/what-bugs-you-most-about-your-doctor/index.htm. Accessed August 16, 2016.
11. Frankel R, Altschuler A, George S, et al. Effects of exam-room computing on clinician–patient communication: A longitudinal qualitative study. J Gen Intern Med. 2005;20:677–682.
12. Lown BA, Rodriguez D. Commentary: Lost in translation? How electronic health records structure communication, relationships, and meaning. Acad Med. 2012;87:392–394.
13. Verghese A. Culture shock—Patient as icon, icon as patient. N Engl J Med. 2008;359:2748–2751.
14. Fonville A, Choe EK, Oldham S, Kientz JA. Exploring the use of technology in healthcare spaces and its impact on empathetic communication. In: IHI 2010: Proceedings of the 1st ACM International Health Informatics Symposium. New York, NY: Association for Computing Machinery; 2010:497–501.
15. Choe EK, Duarte ME, Kientz JA. Empathy in health technologies. Paper presented at: CHI 2010 Workshop on Interactive Systems in Healthcare (WISH); April 2010; Atlanta, GA.
16. Chumley H, Kennedy M, Shah B, Dobbie A. First-year medical students document more pain characteristics when using an electronic health record. Fam Med. 2008;40:462–463.
17. Morrow JB, Dobbie A. Using the electronic health record to enhance student learning. Fam Med. 2010;42:14–15.
18. Hammoud MM, Dalrymple JL, Christner JG, et al. Medical student documentation in electronic health records: A collaborative statement from the Alliance for Clinical Education. Teach Learn Med. 2012;24:257–266.
19. Yudkowsky R, Galanter W, Jackson R. Students overlook information in the electronic health record. Med Educ. 2010;44:1132–1133.
20. Berg K, Majdan JF, Berg D, Veloski J, Hojat M. Medical students’ self-reported empathy and simulated patients’ assessments of student empathy: An analysis by gender and ethnicity. Acad Med. 2011;86:984–988.
21. Hojat M. Empathy in Patient Care: Antecedents, Development, Measurement, and Outcomes. New York, NY: Springer; 2007.
22. Hojat M, Gonnella JS, Nasca TJ, Mangione S, Vergare M, Magee M. Physician empathy: Definition, components, measurement, and relationship to gender and specialty. Am J Psychiatry. 2002;159:1563–1569.
23. Hojat M, Mangione S, Nasca TJ, et al. The Jefferson Scale of Physician Empathy: Development and preliminary psychometric data. Educ Psychol Meas. 2001;61:349–365.
24. Hojat M, Gonnella JS, Mangione S, et al. Empathy in medical students as related to academic performance, clinical competence and gender. Med Educ. 2002;36:522–527.
25. Hojat M, Mangione S, Nasca TJ, Gonnella JS, Magee M. Empathy scores in medical school and ratings of empathic behavior in residency training 3 years later. J Soc Psychol. 2005;145:663–672.
26. Kane GC, Gotto JL, Mangione S, West S, Hojat M. Jefferson Scale of Patient’s Perceptions of Physician Empathy: Preliminary psychometric data. Croat Med J. 2007;48:81–86.
27. Glaser KM, Markham FW, Adler HM, McManus PR, Hojat M. Relationships between scores on the Jefferson Scale of Physician Empathy, patient perceptions of physician empathy, and humanistic approaches to patient care: A validity study. Med Sci Monit. 2007;13:CR291–CR294.
28. Hojat M, Xu G. A visitor’s guide to effect sizes: Statistical significance versus practical (clinical) importance of research findings. Adv Health Sci Educ Theory Pract. 2004;9:241–249.
29. Flin R, Maran R. Identifying and training non-technical skills for teams in acute medicine. Qual Saf Health Care. 2004;13(suppl 1):80–84.
30. Carlson J, Min E, Bridges D. The impact of leadership and team behavior on standard of care delivered during human patient simulation: A pilot study for undergraduate medical students. Teach Learn Med. 2009;21:24–32.
31. Hojat M. Ten approaches for enhancing empathy in health and human services cultures. J Health Hum Serv Adm. 2009;31:412–450.
32. Van Winkle LJ, Fjortoft N, Hojat M. Impact of a workshop about aging on the empathy scores of pharmacy and medical students. Am J Pharm Educ. 2012;76:9.
33. Hojat M. Empathy in Health Professions Education and Patient Care. New York, NY: Springer International Publishing; 2016.