The dynamics of the physician–patient relationship are increasingly being recognized as an important determinant of patient satisfaction and hope, physician test and referral ordering patterns, and clinical outcomes.1–4 Interventions to improve physician–patient interaction and communication may improve outcomes and inform health care delivery reform.5,6 The Centers for Medicare and Medicaid Services explicitly incentivize improvements in patient experience through value-based purchasing, under which providers’ scores on an outpatient experience survey—the Consumer Assessment of Healthcare Providers and Systems Clinician and Group (CG-CAHPS) survey—are considered a quality outcome that influences accountable care organization reimbursement.7
The role of physician empathy in patient experience has not been fully explored. Previous studies have described preliminary associations between physician characteristics and empathy,8–10 but most of the research on this topic has been conducted outside the United States.9–11 These studies, including one large U.S. study,8 were often limited to a few covariates, and many were small or raised concerns about selection bias. Moreover, none of these studies assessed the impact of empathy on standardized measures of patient experience. In this analysis, as part of a larger study, we sought to (1) identify independent correlates of empathy using a large sample of physicians at a large integrated U.S. health system and (2) test the hypothesis that empathy is related to standardized measures of patient experience.
Beginning in 2013, all staff physicians in the Cleveland Clinic Health System were mandated to attend an internal, experiential communication skills training program during their regular work hours. This course was based on the Relationship: Establishment, Development, and Engagement (REDE) model, which emphasizes that relationship is a vital therapeutic agent of change and that communication skills can deepen the patient–provider relationship.12 In addition to the goal of improving physician communication at the Cleveland Clinic, this course was designed to be a potentially generalizable intervention for improving patient experience.13
Our study population included all Cleveland Clinic physicians who participated in the communication course between August 1, 2013, and March 15, 2015. On the day of training, course participants were asked to complete a precourse survey. This survey included self-report items on demographics and primary practice setting (inpatient, outpatient, or both), as well as a validated measure of empathy, the Jefferson Scale of Empathy (JSE).8,14,15 The JSE asked participants to use a seven-point Likert scale to indicate their agreement with 20 statements to generate a score ranging from 20 to 140. Higher JSE scores represented greater empathy. Participants could opt out of including their information in a registry for research purposes.
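To make the scoring arithmetic concrete, the following is a minimal sketch of how a JSE total could be computed from 20 Likert responses. The item wording and the reverse-scoring key belong to the licensed instrument and are not reproduced here; the `reversed_items` argument below is a placeholder for that key.

```python
def score_jse(responses, reversed_items=frozenset()):
    """Score a 20-item Jefferson Scale of Empathy questionnaire.

    `responses` is a list of 20 Likert ratings (1 = strongly disagree,
    7 = strongly agree). Items listed in `reversed_items` (0-based
    indices; a placeholder for the instrument's actual scoring key) are
    reverse-scored so that higher totals always indicate greater
    empathy. The total ranges from 20 to 140.
    """
    if len(responses) != 20 or any(not 1 <= r <= 7 for r in responses):
        raise ValueError("expected 20 ratings on a 1-7 scale")
    return sum(8 - r if i in reversed_items else r
               for i, r in enumerate(responses))

# Sanity checks on the score range described in the text:
print(score_jse([7] * 20))  # maximum possible score: 140
print(score_jse([1] * 20))  # minimum possible score: 20
```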
The registry was also populated with additional provider characteristics extracted from databases maintained by the Cleveland Clinic Office of Professional Staff Affairs, including confirmation of demographic information, years in practice, specialty/subspecialty, and provider degree. Participants’ single-visit and 12-month primary care CG-CAHPS scores from 6 months prior to the course date were also collected, using their National Provider Identification numbers. A request to use the deidentified data from the registry to complete this study was approved by the Cleveland Clinic Institutional Review Board.
For our correlates of empathy analysis, we excluded physicians who had incomplete data in the registry and physicians who practiced in a specialty in which there were fewer than 20 physicians with complete data in the registry. This cutoff was determined to facilitate meaningful comparisons between representative groups of clinicians and is consistent with others’ work in the field.8 For our analysis of associations between JSE scores and measures of patient experience, we included all physicians with a JSE score, specialty, and either a visit-specific or 12-month CG-CAHPS score for at least one of six measures of provider communication or for overall provider rating.
To investigate the association between provider characteristics and empathy, both bivariable and multivariable models were constructed. The association between continuous variable predictors (age and years of practice) and JSE scores was analyzed by linear regression. The association between categorical variables (race/ethnicity, specialty, provider degree, and practice setting) and JSE scores was analyzed by one-way analysis of variance (ANOVA) or Welch ANOVA as appropriate. All predictors of empathy significant at an alpha level of 0.1 in bivariable analysis were included in a multivariable linear model.
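The two-stage strategy above (bivariable screening at an alpha of 0.1, then a single multivariable linear model) can be sketched as follows. The P values here are illustrative placeholders, not the study's actual results, and the regressions and ANOVAs themselves would be run in a statistics package such as SPSS.

```python
# Hypothetical bivariable P values for each candidate predictor of JSE
# score. These numbers are illustrative only; the real values come from
# the linear regressions and ANOVAs described in the text.
bivariable_p = {
    "age": 0.45,
    "years_in_practice": 0.30,
    "race_ethnicity": 0.22,
    "specialty": 0.004,
    "sex": 0.0008,
    "practice_setting": 0.03,
    "provider_degree": 0.04,
}

ALPHA_SCREEN = 0.10  # threshold for entry into the multivariable model

# Predictors passing the screen are carried into one multivariable
# linear model with JSE score as the dependent variable.
multivariable_predictors = sorted(
    name for name, p in bivariable_p.items() if p < ALPHA_SCREEN
)
print(multivariable_predictors)
```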
To investigate the association between JSE scores and continuous measures of patient experience, we calculated Spearman rank correlation coefficients individually between JSE scores and responses to each of the six provider communication items and the overall provider rating item on version 2.0 of the visit-specific and 12-month CG-CAHPS surveys.16 These items were coded as top box for analysis. “Top box” refers to the percentage of survey respondents (i.e., patients) who chose the most positive rating for each CG-CAHPS item. For the six provider communication items, the top-box ratings were “yes, definitely” on the visit-specific survey and “always” on the 12-month survey. For overall provider rating, the top-box ratings for both surveys were “9” and “10” on a 10-point scale ranging from “1 = worst provider possible” to “10 = best possible provider.” Data are described as frequency (%) for categorical variables and median (25th, 75th percentiles) or mean (SD) for continuous variables. All tests were two sided, and an alpha level of 0.05 was used to assess statistical significance. Analyses were performed using SPSS software version 22 (IBM, Armonk, New York).
Between August 1, 2013, and March 15, 2015, 1,550 physicians participated in the Cleveland Clinic Health System’s communication skills training program. Of these participants, 6 declined to have their information used for research purposes and 368 did not provide complete demographic, professional, or empathy data, yielding a study sample of 1,176 physicians (75.9% response rate). Of these physicians, 329 practiced a specialty with fewer than 20 providers and were excluded from the correlates of empathy analysis, leaving 847 physicians for that analysis. Table 1 compares the demographic characteristics of these 847 physicians with those of the 679 course participants excluded from the correlates of empathy analysis.
Of the 1,176 physicians in the study sample, 583 had visit-specific CG-CAHPS scores and 277 had 12-month CG-CAHPS scores, meeting the inclusion criteria for the correlation analysis of JSE scores and standardized measures of patient experience. Characteristics of these two samples and the correlates of empathy sample are similar and presented in Table 2. Briefly, the correlates of empathy sample had a median age of 49.0 years (interquartile range [IQR] 41.0–57.0), and its median number of years practiced was 15.0 (IQR 7.0–25.0). It was mostly white (80.5%) and male (67.7%). Its mean JSE score was 116.6 (SD 12.5), with scores ranging from 47 to 140.
Correlates of empathy analysis: Associations between provider characteristics and empathy
In bivariable analyses (n = 847), specialty (P < .01), female sex (P < .001), outpatient practice setting (P < .05), and doctor of osteopathic medicine (DO) degree (P < .05) were found to be significant predictors of JSE scores (Table 3). Among specialties included in the analysis, psychiatry had the highest mean JSE score (mean = 122.6, SD = 16.3), followed by pediatrics (mean = 121.6, SD = 10.7) and obstetrics–gynecology (mean = 121.0, SD = 9.3).
In the multivariable linear regression model (n = 847; Table 4), with JSE score as the dependent variable, obstetrics–gynecology (β = 5.13; 95% CI, 0.63 to 9.63; P < .05), pediatrics (β = 4.45; 95% CI 0.35 to 8.55; P < .05), psychiatry (β = 7.47; 95% CI, 1.82 to 13.12; P < .05), and thoracic surgery (β = 6.42; 95% CI, 0.35 to 12.48; P < .05) were all associated with a higher JSE score when using internal medicine as the reference specialty category. Being female was also independently associated with a higher JSE score (β = 3.31; 95% CI, 2.19 to 6.11; P < .001). Finally, having a DO degree trended toward a significant association with a higher JSE score (β = 3.31; 95% CI, −0.045 to 6.67; P = .052). These differences represent an approximately 3% to 6% increase in JSE score for the aforementioned predictors. Practice location was not found to be a significant contributor to empathy in the multivariable model.
Correlation analysis of JSE scores and standardized measures of patient experience
Compared with physicians in the visit-specific CG-CAHPS sample (n = 583), physicians in the 12-month CG-CAHPS sample (n = 277) were less likely to be surgeons (38.3% vs 25.5%) and more likely to practice in primary care specialties such as family medicine, internal medicine, pediatrics, and obstetrics–gynecology (4.6% vs 63.2%).
Of the six standardized provider communication items on the visit-specific CG-CAHPS survey, patient ratings of physicians on four—the provider’s ability to explain information clearly, listen carefully, provide understandable information, and show respect—were significantly correlated with JSE scores, and patient ratings of the provider’s ability to know the medical history trended toward significance. Of the six standardized provider communication items on the 12-month CG-CAHPS survey, patient ratings of physicians on three—the provider’s ability to listen carefully, know the medical history, and show respect—were significantly correlated with JSE scores, and patient ratings of the provider’s ability to explain information clearly trended toward statistical significance. Finally, overall provider rating was significantly associated with JSE score for physicians in the visit-specific CG-CAHPS sample (rs = 0.160, P = .001), while the correlation between overall provider rating and JSE score trended toward statistical significance for physicians in the 12-month CG-CAHPS sample (rs = 0.110, P = .067; Table 5).
In this observational study of 847 physicians within an integrated health system, we found that psychiatrists, pediatricians, and obstetricians/gynecologists had the highest levels of empathy. Additionally, in multivariable analysis, certain specialties and female sex were independently associated with higher empathy scores. The differences seen between groups, though statistically significant, were generally small, ranging from a 3% to 6% change in JSE score for each respective predictor. More importantly, while any given correlation was generally weak, there were multiple significant associations between JSE scores and different aspects of physician communication, as measured by visit-specific and 12-month CG-CAHPS scores.
Others have studied the association between physician characteristics and empathy scores. However, many of these past studies used smaller convenience samples, and most did not focus on U.S. populations.9,11,14 In contrast, our study sample included a large group of Cleveland Clinic physicians who were mandated to participate in a communication skills training program, and a large proportion of these physicians consented to the use of their precourse data, including JSE scores, for research. Our sample—the largest of any study published to date—allowed us to identify previously unknown associations between physician characteristics and empathy. Our study is also the first to assess the relationship between JSE scores and standardized measures of patient experience.
In a study of physicians at one U.S. medical center, Hojat et al8 found that women had higher empathy scores, although this finding did not reach statistical significance. They also reported that only psychiatry empathy scores were significantly higher than those of select other specialties, but that the psychiatry scores were not significantly higher than those of internal medicine.8 In our study, female sex was significantly associated with higher empathy scores, and several specialties’ empathy scores were found to differ significantly from our internal medicine reference category’s scores. These specialties were psychiatry, pediatrics, obstetrics–gynecology, and thoracic surgery. Our results also differ from those reported in other countries. For example, in a study of Italian physicians, neither sex nor specialty was a significant predictor of empathy.11 This suggests that sociocultural factors may be determinants of empathy. In addition, we believe we are the first to report that DOs had higher empathy scores than did MDs. Although this finding was of borderline statistical significance, the magnitude was almost as large as that seen with sex or specialty.
Several explanations exist for the observed relationships between empathy and sex, specialty, and degree. First, considering specialty, medical students may self-select for specialties that they perceive will best match with their own empathy attributes.17 This might explain why empathy correlations with specialty choice are generally stable across medical students and attending physicians.18 Alternatively, it is possible that empathy-related education varies by specialty. Specialties such as psychiatry, pediatrics, and obstetrics–gynecology may include continued interpersonal education because of the nature of the specialty, whereas diagnostic radiology, for instance, may involve less empathy training because of the specialty’s technical focus. This might explain why some specialties that were classified as technology oriented (i.e., not people oriented) in a medical student empathy study18 were also associated with lower empathy in our model. This variation in empathy by specialty among practicing physicians might also be a function of the practice environment, which could explain the differences in empathy by specialty reported in our study compared with Hojat and colleagues’8 previous work, as well as why thoracic surgeons in our sample had particularly high empathy scores.9,18 Ultimately, the reasons for this variation across institutions represent an area for future exploration. Such research might identify best practices that could be adopted broadly to improve physician empathy and communication.
Whether it is feasible to teach empathy to health care professionals, in schools or by leadership example, is a subject of debate. Training may increase physician use of empathetic statements during patient encounters.19–22 A review on the topic of teaching empathy, however, noted that the studies that have explored the impact of education on empathy have often been hindered by small sample sizes and sources of bias.23 Nevertheless, trends in empathy among medical students have been well documented and can provide some insight into the complexities of how and when to teach empathy.24,25 For example, a study of U.S. medical students also found that specialty type (people oriented or technology oriented) and sex were significant predictors of empathy.18 Our study’s findings provide evidence of some degree of concordance between trends in practicing physician empathy and medical student empathy. This consistency between trends in medical student and physician data suggests that empathy might stabilize early in the physician’s career trajectory. Thus, interventions to improve physician empathy might be most beneficial if they are started earlier in the educational timeline.
There are also several potential explanations for the association found between female sex and higher empathy. Studies have suggested that women are better able to pick up on emotional signals than are men, implying a potentially intrinsic empathetic quality.26–28 Furthermore, multiple systematic reviews have concluded that female physicians tend to spend more time with their patients and to use more patient-centered communication methods, both of which might be influenced by empathy.29,30 Although our results thus mirror the literature, that reflection alone does not provide any causal mechanism for the observed relationship; that is, our results cannot speak to whether females are born predisposed to acting empathically or whether they develop the traits. However, in animal models and human studies, nurturing has been shown to inform nature and to play a role in informing gender differences in personality, behavior, and preferences.31–33 Therefore, it is plausible that targeted empathy education may be able to reduce this gap.
Finally, considering provider degree, the stated importance of a biopsychosocial model of healing within osteopathic medicine has been hypothesized to be a contributing factor to the stable level of empathy reported among osteopathic medical students throughout their undergraduate medical training.34 It is possible that this reverence for a holistic view of medicine extends to practicing physicians and translates as greater empathy among DO physicians when compared with their MD counterparts. However, this is a historically informed hypothesis35—to the best of our knowledge, there is no recent literature showing that DO curricula are more holistic than MD curricula, or that DOs take a more patient-centered approach in practice than do MDs. Future quantitative studies might reproduce our finding with a larger sample of DOs, while qualitative studies could explore the perspectives of MD and DO students and physicians with regard to their beliefs on the role of a biopsychosocial model in the care of patients.
We found that patient ratings on three of six provider communication items were significantly correlated with physician empathy among physicians in the 12-month CG-CAHPS sample. Furthermore, patient ratings on four of the six provider communication items and the overall provider rating were significantly correlated with physician empathy among physicians in the visit-specific CG-CAHPS sample. Past studies have explored the relationship between physician communication skills and patient satisfaction. For instance, one group found an association between hospitalist empathy and decreased patient anxiety as well as increased patient rating of the hospitalist.36 But these studies often have not used standardized empathy scales or standardized measures of patient experience.37–39 In our study, empathy correlated with patient ratings on CG-CAHPS questions that may be more subjectively interpreted: For example, there was no relationship between empathy and patient ratings of how much time the physician spent with the patient. This suggests that empathy may contribute most to the more subjective components of patient experience, such as being listened to and respected, but that other factors may have a more significant effect on other components of a patient’s experience.
These results suggest that programs to increase provider empathy may be a potential way to improve patient satisfaction scores. This investment may pay dividends as health systems are increasingly assessed and reimbursed according to principles of value-based care.40–42 This study thus adds to the body of literature describing how provider empathy may be central to the physician–patient interaction and, by extension, may influence clinical health outcomes. For example, it has been suggested that provider empathy may aid in reducing inappropriate antibiotic prescribing for acute respiratory infections,43 may be correlated with less severe inflammatory markers associated with the common cold,44 and may be associated with better control of hemoglobin A1c and LDL among patients with diabetes.45 Although each individual correlation between a CG-CAHPS score and empathy was weak in this study, suggesting that empathy is only one of many factors shaping patient experience, the presence of significant correlations across several dimensions of patient experience lends credence to the idea that physician empathy plays an important role in satisfactory care encounters. It is therefore of interest to continue to identify additional predictors of patient experience and to examine how the interplay between empathy and other, yet-to-be-identified physician and institutional factors might be connected to standardized clinical and patient experience measures.
This study had several limitations. First, it was observational in nature, so inferences cannot be drawn about causal relationships between the variables. Additionally, this study was conducted at a single health system. While our results generally confirmed what has been presented in smaller studies, they may not be applicable beyond large integrated health systems like the Cleveland Clinic Health System. Constructing a linear regression model using only the predictors found to be significant in bivariable analysis may also lead to model overfitting, further limiting the applicability of the model; however, backward linear regression yielded an identical final model, suggesting that the findings may be generalizable. Furthermore, while the high response rate and use of reliable data sources may limit recall, response, and some other forms of bias, these biases, as well as potential social desirability bias in completing the empathy questionnaire, cannot be completely ruled out. Considering these limitations, the results should be generalized with appropriate caution.
Our study shows that, after adjusting for a number of demographic and professional covariates, specialty and sex are statistically significant predictors of physician empathy, while provider degree is a borderline significant predictor. Furthermore, we found that physician empathy is correlated with multiple standardized measures of patient experience. Building upon these results, future work should explore what aspects of certain physicians’ practices—such as the practices of those working in obstetrics–gynecology, psychiatry, and pediatrics—could be adopted to improve levels of empathy among all practitioners. Additionally, future work should focus on better characterizing the relationship between empathy and patient satisfaction as well as exploring whether efforts to improve empathy, such as education, might lead to improved patient experience.
1. Stewart M, Brown JB, Donner A, et al. The impact of patient-centered care on outcomes. J Fam Pract. 2000;49:796–804.
2. Robinson JD, Hoover DR, Venetis MK, Kearney TJ, Street RL Jr. Consultations between patients with breast cancer and surgeons: A pathway from patient-centered communication to reduced hopelessness. J Clin Oncol. 2013;31:351–358.
3. Mazor KM, Beard RL, Alexander GL, et al. Patients’ and family members’ views on patient-centered communication during cancer care. Psychooncology. 2013;22:2487–2495.
4. Stewart MA. Effective physician–patient communication and health outcomes: A review. CMAJ. 1995;152:1423–1433.
5. Kelley JM, Kraft-Todd G, Schapira L, Kossowsky J, Riess H. The influence of the patient–clinician relationship on healthcare outcomes: A systematic review and meta-analysis of randomized controlled trials. PLoS One. 2014;9:e94207.
6. Cals JW, Butler CC, Hopstaken RM, Hood K, Dinant GJ. Effect of point of care testing for C reactive protein and training in communication skills on antibiotic use in lower respiratory tract infections: Cluster randomised trial. BMJ. 2009;338:b1374.
7. Centers for Medicare and Medicaid Services. Medicare Shared Savings Program: Shared savings and losses and assignment methodology: Specifications. Version 3. https://www.cms.gov/Medicare/Medicare-Fee-for-Service-Payment/sharedsavingsprogram/Downloads/Shared-Savings-Losses-Assignment-Spec-v2.pdf. Published December 2014. Accessed January 30, 2017.
8. Hojat M, Gonnella JS, Nasca TJ, Mangione S, Vergare M, Magee M. Physician empathy: Definition, components, measurement, and relationship to gender and specialty. Am J Psychiatry. 2002;159:1563–1569.
9. Kataoka HU, Koide N, Hojat M, Gonnella JS. Measurement and correlates of empathy among female Japanese physicians. BMC Med Educ. 2012;12:48.
10. Lelorain S, Sultan S, Zenasni F, et al. Empathic concern and professional characteristics associated with clinical empathy in French general practitioners. Eur J Gen Pract. 2013;19:23–28.
11. Di Lillo M, Cicchetti A, Lo Scalzo A, Taroni F, Hojat M. The Jefferson Scale of Physician Empathy: Preliminary psychometrics and group comparisons in Italian physicians. Acad Med. 2009;84:1198–1202.
12. Windover AK, Boissy A, Rice TW, Gilligan T, Velez VJ, Merlino J. The REDE model of healthcare communication: Optimizing relationship as a therapeutic agent. J Patient Exp. 2014;1(1):8–13.
13. Boissy A, Windover AK, Bokar D, et al. Communication skills training for physicians improves patient satisfaction. J Gen Intern Med. 2016;31:755–761.
14. Hojat M, Mangione S, Nasca TJ, et al. The Jefferson Scale of Physician Empathy: Development and preliminary psychometric data. Educ Psychol Meas. 2001;61(2):349–365.
15. Tavakol S, Dennick R, Tavakol M. Psychometric properties and confirmatory factor analysis of the Jefferson Scale of Physician Empathy. BMC Med Educ. 2011;11:54.
16. Agency for Healthcare Research and Quality. Get the Clinician and Group Survey and instructions. CG-CAHPS 12-Month Survey 2.0 and instructions; CG-CAHPS Visit Survey 2.0 and instructions. https://www.ahrq.gov/cahps/surveys-guidance/cg/instructions/index.html. Accessed February 13, 2017.
17. Harsch HH. The role of empathy in medical students’ choice of specialty. Acad Psychiatry. 1989;13:96–98.
18. Chen D, Lew R, Hershman W, Orlander J. A cross-sectional measurement of medical student empathy. J Gen Intern Med. 2007;22:1434–1438.
19. Brunero S, Lamont S, Coates M. A review of empathy education in nursing. Nurs Inq. 2010;17:65–74.
20. Goldstein AP, Goedhart A. The use of structured learning for empathy enhancement in paraprofessional psychotherapist training. J Community Psychol. 1973;1(2):168–173.
21. Kirk WG, Thomas AH. A brief inservice training strategy to increase levels of empathy of psychiatric nursing personnel. J Psychiatr Treat Eval. 1982;4:177–179.
22. Bonvicini KA, Perlin MJ, Bylund CL, Carroll G, Rouse RA, Goldstein MG. Impact of communication training on physician expression of empathy in patient encounters. Patient Educ Couns. 2009;75:3–10.
23. Stepien KA, Baernstein A. Educating for empathy. A review. J Gen Intern Med. 2006;21:524–530.
24. Hojat M, Mangione S, Nasca TJ, et al. An empirical study of decline in empathy in medical school. Med Educ. 2004;38:934–941.
25. Hojat M, Vergare MJ, Maxwell K, et al. The devil is in the third year: A longitudinal study of erosion of empathy in medical school. Acad Med. 2009;84:1182–1191.
26. Rosip JC, Hall JA. Knowledge of nonverbal cues, gender, and nonverbal decoding accuracy. J Nonverbal Behav. 2004;28(4):267–286.
27. Elfenbein HA, Marsh AA, Ambady N. Emotional intelligence and the recognition of emotion from facial expressions. In: Barrett LF, Salovey P, eds. The Wisdom in Feeling: Psychological Processes in Emotional Intelligence. New York, NY: Guilford Press; 2002:37–59.
28. Hall JA. Gender effects in decoding nonverbal cues. Psychol Bull. 1978;85(4):845–857.
29. Roter DL, Hall JA. Physician gender and patient-centered communication: A critical review of empirical research. Annu Rev Public Health. 2004;25:497–519.
30. Cooper LA, Roter DL, Johnson RL, Ford DE, Steinwachs DM, Powe NR. Patient-centered communication, ratings of care, and concordance of patient and physician race. Ann Intern Med. 2003;139:907–915.
31. Wallen K. Nature needs nurture: The interaction of hormonal and social influences on the development of behavioral sex differences in rhesus monkeys. Horm Behav. 1996;30:364–378.
32. Lippa RA. Gender, Nature, and Nurture. Mahwah, NJ: Lawrence Erlbaum Associates; 2005.
33. Gneezy U, Leonard KL, List JA. Gender Differences in Competition: Evidence From a Matrilineal and a Patriarchal Society. Working paper 13727. Cambridge, MA: National Bureau of Economic Research; 2008. http://www.nber.org/papers/w13727.pdf. Accessed January 30, 2017.
34. Kimmelman M, Giacobbe J, Faden J, Kumar G, Pinckney CC, Steer R. Empathy in osteopathic medical students: A cross-sectional analysis. J Am Osteopath Assoc. 2012;112:347–355.
35. Weiner JP. Expanding the US medical workforce: Global perspectives and parallels. BMJ. 2007;335:236–238.
36. Weiss R, Vittinghoff E, Anderson WG. Hospitalist empathy is associated with decreased patient anxiety and higher ratings of communication in admission encounters. J Hosp Med. 2016;11(suppl 1). http://www.shmabstracts.com/abstract/hospitalist-empathy-is-associated-with-decreased-patient-anxiety-and-higher-ratings-of-communication-in-admission-encounters/. Accessed January 30, 2017.
37. Williams S, Weinman J, Dale J. Doctor–patient communication and patient satisfaction: A review. Fam Pract. 1998;15:480–492.
38. Kim SS, Kaplowitz S, Johnston MV. The effects of physician empathy on patient satisfaction and compliance. Eval Health Prof. 2004;27:237–251.
39. Pollak KI, Alexander SC, Tulsky JA, et al. Physician empathy and listening: Associations with patient satisfaction and autonomy. J Am Board Fam Med. 2011;24:665–672.
40. Fowler L, Saucier A, Coffin J. Consumer assessment of healthcare providers and systems survey: Implications for the primary care physician. Osteopath Fam Physician. 2013;5(4):153–157.
41. Burwell SM. Setting value-based payment goals—HHS efforts to improve U.S. health care. N Engl J Med. 2015;372:897–899.
42. Dupree JM, Neimeyer J, McHugh M. An advanced look at surgical performance under Medicare’s hospital-inpatient value-based purchasing program: Who is winning and who is losing? J Am Coll Surg. 2014;218:1–7.
43. Colgan R, Powers JH. Appropriate antimicrobial prescribing: Approaches that limit antibiotic resistance. Am Fam Physician. 2001;64:999–1004.
44. Rakel D, Barrett B, Zhang Z, et al. Perception of empathy in the therapeutic encounter: Effects on the common cold. Patient Educ Couns. 2011;85:390–397.
45. Hojat M, Louis DZ, Markham FW, Wender R, Rabinowitz C, Gonnella JS. Physicians’ empathy and clinical outcomes for diabetic patients. Acad Med. 2011;86:359–364.