Tools to Assess Behavioral and Social Science Competencies in Medical Education: A Systematic Review

Carney, Patricia A. PhD; Palmer, Ryan T. EdD; Fuqua Miller, Marissa; Thayer, Erin K.; Estroff, Sue E. PhD; Litzelman, Debra K. MD; Biagioli, Frances E. MD; Teal, Cayla R. PhD; Lambros, Ann PhD; Hatt, William J.; Satterfield, Jason M. PhD

doi: 10.1097/ACM.0000000000001090

In a 2004 report, the Institute of Medicine (IOM) concluded that, although 50% of the causes of premature morbidity and mortality are related to behavioral and social factors, medical school curricula in these areas are insufficient.1–3 The behavioral and social science (BSS) domains that the IOM deemed critical in their report included (1) mind–body interactions in health and disease, (2) patient behavior, (3) physician role and behavior, (4) physician–patient interactions, (5) social and cultural issues in health care, and (6) health policy and economics.1 Within these six domains, the IOM identified 26 high-priority topics, such as health risk behaviors, principles of behavior change, ethics, physician well-being, communication skills, socioeconomic inequalities, and health care systems design.1 The Association of American Medical Colleges similarly identified core BSS content areas and connected them with other educational frameworks, including the Canadian Medical Education Directions for Specialists competency framework and the Accreditation Council for Graduate Medical Education (ACGME) core competencies.4

In addition, the Liaison Committee on Medical Education (LCME) incorporates, as part of its educational program requirements for accreditation, BSS domains5 and requires that schools identify the competencies in these areas that both the profession and the public can expect of a practicing physician. Medical schools must use both content and outcomes-based assessments to demonstrate their learners’ progress toward and achievement of these competencies. To do so, many schools use the broad ACGME core competencies—professionalism, medical knowledge, patient care, interpersonal skills and communication, systems-based practice, and practice-based learning and improvement.6 Within these six categories, BSS competencies are nested among other milestones intended to mark learners’ progression toward knowledge and skill acquisition. At present, no fully articulated, standardized list of BSS competencies exists, nor has there been a cross-translation of the LCME standards, the IOM-defined BSS domains, and the ACGME core competencies.

This lack of standardization makes it difficult to pool evaluation data collected across medical schools, which could help evaluate the effectiveness of different training models or instructional designs for BSS curricula. Moreover, determining the levels of achievement of entrustable professional activities or milestones7 as well as conducting rigorous educational research require that measures of competency development are validated. However, often this important step is skipped entirely, not fully completed, or lacks the rigor needed to produce reliable results. Given the breadth of the competency assessment literature and the existence of contradictory or incomplete findings, a systematic review of published work will be valuable to educators as well as administrators seeking to satisfy the LCME standards and instruct their learners in the ACGME core competencies.

Thus, we conducted a systematic review to identify and evaluate the quality of the assessment tools used to measure BSS competencies. Studies were classified by article type and quality. The strongest assessment tools were mapped to both the IOM-defined BSS domains and to the BSS-relevant LCME standards and ACGME core competencies. Our findings can guide educators and educational researchers to both validated instruments for assessing BSS competencies in learners and the best evaluation designs and educational strategies to determine what may be needed in future educational efforts.

Method

Guiding principles

We used the Best Evidence Medical and Health Professional Education Guide8 in our systematic review. As such, we created two review groups, one to conduct the actual review (P.A.C., R.T.P., M.F.M., E.K.T.) and a second to act as a wider authorship and editorial advisory group (S.E.E., D.K.L., F.E.B., C.R.T., A.L., J.M.S.). We next specified our research question: What valid and reliable instruments have been developed to assess learner (medical student and resident) competencies specifically related to the social and behavioral sciences? We considered instruments that may be applicable to other health professions learners as well. Subsequently, we identified a practical, conceptual framework to identify those competencies specifically related to the social and behavioral sciences that would be of the greatest utility to educators and administrators. To accomplish this step, we analyzed the LCME accreditation requirements,5 which are divided into five sections: (1) institutional setting (e.g., governance and organizational environment); (2) educational program for the MD degree (e.g., objectives, learning environment and approach, structure in design and content); (3) medical students (e.g., student demography, admissions, student services); (4) faculty (e.g., qualifications, personnel, organization and governance); and (5) educational resources (e.g., faculty background and time, finances and facilities). As quality assessments of BSS competencies are needed in graduate medical education as well, we also included the ACGME core competencies (professionalism, medical knowledge, patient care, interpersonal skills and communication, systems-based practice, and practice-based learning and improvement) in the development of our conceptual framework.

To focus our review, we selected components from the LCME’s Section II: Educational Program for the MD Degree (ED) and focused specifically on educational content. (The LCME standards provided more detail than the ACGME milestones, and thus we relied heavily on the LCME verbiage as we refined our review.) We reviewed each of the content areas (ED-10 through ED-23), to identify those most relevant to the six IOM-defined BSS domains. Of the 13 possible components, we selected 6 BSS-relevant curriculum requirements (ED-10, ED-19 through ED-23) and 3 BSS-relevant integrative program requirements (ED-13 through ED-15), which provided the conceptual framework and core search terms for our literature review (see Table 1). We then weighted each selected LCME standard using a consensus process that included all authors but two (W.J.H., R.T.P.). The weights were assigned to reflect the strength of each standard’s relationship to each IOM-defined BSS domain, with no assigned weight indicating no relationship, + indicating a somewhat relevant relationship, ++ indicating a moderately relevant relationship, and +++ indicating a very relevant relationship.

Table 1: Conceptual Frameworks, and Assigned Weights, Used in a Systematic Review of the Literature on Tools to Assess Behavioral and Social Science Competencies in Medical Education

Search terms

We conducted a preliminary search for articles published between January 1, 2002 and March 1, 2014 using the databases OVID (Medline), CINAHL, PubMed, ERIC, Research and Development Resource Base (RDRB), SOCIOFILE, and PsycINFO. With guidance from a library science expert, terms used in the search included education, curriculum, course evaluation, students, teaching, competence, and program evaluation. These terms were further combined with the selected BSS-relevant LCME standards and the IOM-defined BSS domain keywords. See Supplemental Digital Appendix 1 (at http://links.lww.com/ACADMED/A328) for a sample search strategy with the limits and quotations used to search the OVID (Medline) database.
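As a rough illustration of how such concept blocks can be combined, the sketch below assembles a Boolean query from an education block, an evaluation block, and a BSS keyword block. The term lists and structure are only examples; the strategies, limits, and quotations actually used are those in Supplemental Digital Appendix 1.

```python
# Illustrative sketch only: combine education, evaluation, and BSS concept blocks
# into a single Boolean query. The term lists and structure are examples, not the
# search strategy reported in Supplemental Digital Appendix 1.
education_terms = ["education", "curriculum", "teaching", "students"]
evaluation_terms = ["competence", "program evaluation", "course evaluation"]
bss_terms = ["communication skills", "cultural competence",
             "behavior change counseling", "professionalism"]

def or_block(terms):
    """Join terms into a parenthesized OR block, quoting multiword phrases."""
    quoted = [f'"{t}"' if " " in t else t for t in terms]
    return "(" + " OR ".join(quoted) + ")"

query = " AND ".join(or_block(block) for block in
                     (education_terms, evaluation_terms, bss_terms))
print(query)
# (education OR curriculum OR teaching OR students) AND (competence OR
# "program evaluation" OR "course evaluation") AND ("communication skills" OR ...)
```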

Inclusion/exclusion criteria

We sought to include articles reporting on some form of validity or reliability testing in more than one learning setting for BSS competency assessment measures. When articles were identified, we reviewed their reference lists for additional articles to consider. Two specially trained research assistants independently reviewed all titles and abstracts manually to assess appropriateness for inclusion. The two research assistants and one author (P.A.C.) initiated a consensus process, which continued until agreement among the group was reached on inclusion and exclusion according to title and abstract. Figure 1 outlines the process we undertook to search for and ultimately identify the final articles for detailed review. We excluded articles that did not cover the BSS competencies, that reported solely on learners’ satisfaction or self-reported or self-assessed competency, or that did not describe some form of validation of the assessment instrument.

Figure 1: Literature search and article selection process for a systematic review of the literature, published between January 2002 and March 2014, on assessment tools used to evaluate behavioral and social science competencies in medical education. Primary reasons for rejection at title and abstract review included (1) lack of reporting on psychometric properties or validity or reliability testing in more than one learning setting; (2) measures that did not assess learner competency in one of the selected areas; (3) results that were based solely on learners’ satisfaction or self-reported or self-assessed competency; or (4) the curriculum being tested did not address the behavioral and social sciences (e.g., it focused on anatomy or surgical skills). Some articles were rejected after partial data abstraction for multiple reasons and therefore were counted twice here.

Methods for data abstraction

The review group (P.A.C., R.T.P., M.F.M., E.K.T.) created an abstraction form using the following variables: the type of article, how it was found, if the article described a BSS learner competency and one or more measures of that competency, the quality of the instrument (does the study describe a form of validation of the instrument[s] used), if institutional review board (IRB) review was mentioned, the type of study, the site of the study, learner level of the participants, curriculum specialty, the BSS or competency measurement framework used, the curriculum format tested and for how many hours, how the competency was assessed, what was measured and when, and our classification of the strength of the instrument testing (as described below). The data abstraction form was tested with approximately 30 articles and was iteratively revised and retested to ensure that data capture during the abstraction process was accurate and that only applicable studies would be included. Selected members of the advisory group (F.E.B., A.L., J.M.S.) provided feedback and contributed to the consensus process as needed.
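For teams that capture such abstractions electronically, one way to picture the form is as a structured record, sketched below in Python. Every field name is hypothetical and simply mirrors the variables listed above; it does not reproduce the actual form used in this review.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class AbstractionRecord:
    """Illustrative structure for one abstracted article (field names are hypothetical)."""
    article_type: str                      # instrument development, educational research, or curriculum evaluation
    how_found: str                         # e.g., database search or reference-list review
    bss_competencies: List[str]            # BSS learner competencies described
    instruments: List[str]                 # measure(s) of those competencies
    validation_described: bool             # does the study describe a form of instrument validation?
    irb_review_mentioned: bool
    study_type: str
    study_site: str
    learner_level: str                     # e.g., medical student or resident
    curriculum_specialty: Optional[str] = None
    measurement_framework: Optional[str] = None
    curriculum_format: Optional[str] = None
    curriculum_hours: Optional[float] = None
    assessment_method: Optional[str] = None
    what_and_when_measured: Optional[str] = None
    evidence_strength: Optional[str] = None  # "weak", "moderate", or "strong" (see classification below)
```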

Methods for assessing instrument quality and study design

We focused both on previously validated BSS competency assessment instruments and on new instruments validated within the included article. Assessing the evidence derived from the included articles necessarily involved commingling assessments of the strength of the instrument itself and of the strength of the study design, as studies rarely focused on just one of these features. For example, a high-quality article was one that applied a validated BSS instrument (either from the published literature or the included article) using a rigorous study design, such as a randomized controlled trial. A low-quality article was one that applied an unvalidated measure of BSS competency and used a weak study design to measure the impact of the educational intervention, such as a postintervention survey of student satisfaction.

We categorized the level of evidence supporting each BSS competency assessment instrument and study design as weak, moderate, or strong. The weak evidence category included studies containing limited information on the validity and/or reliability of the evaluation instrument or employing a weak study design, such as a single-group pre–post design. The moderate evidence category included studies that provided some information about the reliability of the measures used but did not assess or retest them rigorously in the study sample, or that had a moderately strong study design, such as a single-group historical cohort assessment. The strong evidence category included studies that tested the evaluation instruments rigorously in the study population and used a strong study design, such as a randomized controlled or crossover trial design.
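Roughly speaking, the overall category was limited by whichever of the two features, instrument rigor or design rigor, was weaker. The sketch below approximates that rubric in code; it is an illustrative simplification of what was, in practice, a judgment-based consensus process, not a rule the authors applied mechanically.

```python
# Approximate the evidence rubric: the overall category is capped by the weaker
# of instrument-validation rigor and study-design rigor. Illustrative only.
LEVELS = {"weak": 0, "moderate": 1, "strong": 2}
NAMES = {0: "weak", 1: "moderate", 2: "strong"}

def classify_evidence(instrument_rigor: str, design_rigor: str) -> str:
    """Return the overall evidence category for one study."""
    return NAMES[min(LEVELS[instrument_rigor], LEVELS[design_rigor])]

# A rigorously validated instrument applied within a weak single-group design
# still yields only weak overall evidence.
print(classify_evidence("strong", "weak"))  # -> "weak"
```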

Methods for article categorization, data entry, and analysis

Articles identified for data abstraction were classified into three categories: (1) instrument development with psychometric assessment only, defined as articles devoted to the statistical validation of a new or existing competency tool, such as a measure of physician empathy; (2) educational research, defined as articles that used a specific study design and BSS competency assessment tool to draw conclusions about a defined educational research question; and (3) curriculum evaluation, defined as articles that assessed specific curriculum features.

Three authors (P.A.C., R.T.P., M.F.M.) independently abstracted data from all articles that met the review criteria and then employed a rigorous consensus process for determining the final content of all abstractions during weekly consensus meetings. At these meetings, the variables from each article were discussed until consensus was reached. In one instance, the three authors could not come to a consensus. In this case, the larger advisory group was consulted for a final decision. The final consensus-based abstraction forms were entered into a database designed for this purpose. The data files were then checked and cleaned prior to analysis.

The Web-based system we used for database entry was a free and open-source application (LimeSurvey; Carsten Schmitz, Hamburg, Germany; https://www.limesurvey.org/en/), which was run on a secure and private server hosted at Oregon Health & Science University. Access to the system was limited to team members only, and its use allowed us to easily confirm which team members were reviewing which articles. Descriptive statistics were used to characterize the included articles (SPSS, IBM Corp., Armonk, New York).

Results

Our initial literature review identified 5,104 study titles and abstracts, many of which did not meet our criteria for further review (see Figure 1). Detailed title and abstract review along with searches of reference lists yielded 170 articles that we retrieved for full text review and data abstraction. Of these, we categorized 21 studies as instrument development with psychometric assessment only, 62 as educational research, and 87 as curriculum evaluation (see Supplemental Digital Appendix 2 at http://links.lww.com/ACADMED/A328). A more complete review during data abstraction revealed that 114 met our criteria for full abstraction (see Supplemental Digital Appendix 2).9–122 At the partial abstraction stage, most article exclusions occurred because instrument validation was absent; this exclusion was most common among articles in the curriculum evaluation category. Other exclusions occurred because the article or assessment tool described did not actually address a BSS competency.

The majority of articles mentioned IRB review (13 of 20 instrument development studies, 35 of 48 educational research studies, and 36 of 46 curriculum evaluation studies), with most receiving approval or exemption (see Supplemental Digital Appendix 2). Randomized study designs with or without controls were most common for educational research studies (23 of 48; 48%) compared with instrument development studies (1 of 20; 5%) and curriculum evaluation studies (0 of 46; 0%), while prospective cohort pre–post designs were most common for curriculum evaluation studies (24 of 46; 52%) compared with educational research studies (6 of 48; 13%) and instrument development studies (1 of 20; 5%) (see Supplemental Digital Appendix 2). Validation using formal psychometric assessment was most common for instrument development studies (19 of 20; 95%) and educational research studies (25 of 48; 52%) compared with curriculum evaluation studies (17 of 46; 37%). We noted significant variability in the BSS frameworks and competency measures used to guide or evaluate the assessment instruments (see Supplemental Digital Appendix 2).

The most common BSS learner competency assessed across all types of articles was communication skills (see Supplemental Digital Appendix 3 at http://links.lww.com/ACADMED/A328). Cultural competence and behavior change counseling (which included motivational interviewing) also were commonly assessed, especially in educational research and curriculum evaluation studies. Using the ACGME competency language, interpersonal skills and communication (in > 90% of included articles), patient care (> 62% of articles), and medical knowledge (> 43% of articles) were most commonly assessed, with practice-based learning and improvement (≤ 10% of articles) and systems-based practice (≤ 10% of articles) less commonly assessed (see Supplemental Digital Appendix 3).

Validated instruments that assessed knowledge, attitudes, and skills were most commonly used to evaluate BSS competencies (65%–85%), with standardized patients assessing learners’ performance being the second most common approach (30%–44%) (see Supplemental Digital Appendix 3). Very few assessments were based on the direct observation of learners. Articles reporting on psychometric assessments typically provided strong evidence for the validity of the instrument (52%–90%), although 16 articles mentioned that testing was done without specifying the validation method used. Validation by expert consensus was also reported (15%–42%), though less often than psychometric assessment.

We ranked 33 articles (29%) as contributing strong evidence to support BSS competency measures of communication skills, cultural competence, empathy/compassion, behavioral health counseling, professionalism, and teamwork. Most of these were educational research studies (see Supplemental Digital Appendix 3). In Appendix 1, we present the tools we found to have the strongest evidence for validity and reliability as well as those with strong study or evaluation designs. We also map these tools to both the LCME standards and the ACGME core competencies.

In Supplemental Digital Appendix 4, we provide additional details regarding the included articles. In Supplemental Digital Appendixes 5 and 6, we describe the 62 articles (54%) that yielded moderate evidence in support of a BSS assessment tool and the 19 articles (16.7%) that yielded weak evidence, respectively. In Supplemental Digital Appendix 7, we map these articles to the BSS-relevant LCME standards. The majority (n = 65) mapped to the communication skills standard (ED-19), though all LCME accreditation requirements specific to or integrated with BSS competencies are represented in the included articles. Not all articles mapped to the IOM domains, however, with mind–body interactions in health and disease and health policy and economics being represented least often. All supplemental digital content is available at http://links.lww.com/ACADMED/A328.

Discussion

This systematic review is the first to identify valid and reliable instruments that have been developed to assess learner competencies specifically related to the social and behavioral sciences. Our aim was to provide the greatest utility to educators and administrators by linking these instruments with the LCME accreditation requirements and the ACGME core competencies. We learned that tools assessing communication skills were supported by the most rigorous validation and study design approaches. These tools included both written tests assessing knowledge, attitudes, and skills as well as assessments conducted with standardized patients. Overall, we found a paucity of assessments that used the direct observation of learners interacting with actual patients. Although such approaches are time and resource intensive, several articles support the value of direct observation in assessing learner competencies.123–126

Other high-quality assessments evaluated cultural competence, empathy/compassion, behavior change counseling (e.g., motivational interviewing), and professionalism. However, only one high-quality assessment tool, described in a 2008 article, evaluated teamwork. As the National Center for Interprofessional Practice and Education127 has plans to conduct more rigorous instrument development and validation, additional work in this area might be forthcoming. We recommend that educators and educational researchers review the literature for established, validated tools to assess BSS competencies in their learners rather than reinventing the wheel. We found several well-validated tools that were used in only one study.

One of the most significant challenges in completing this review was distinguishing between the strength of the assessment instruments and the strength of the study designs. For example, a study might use a very strong tool within an evaluation design so weak that the strength of the measure could not compensate for the design’s limitations when drawing conclusions from the study findings. The strongest articles used well-validated tools combined with robust evaluation designs, such as randomized designs or historical cohort comparisons. We also included several rigorous qualitative studies in this review. These studies used strong qualitative research methodologies and well-validated instruments. By contrast, moderate and weak articles used less rigorous approaches to instrument validation, and they had weak study designs that limited the conclusions that could be drawn. Not surprisingly, we found the most rigorous assessments in articles that described robust instrument development and testing. Although educational research articles were also likely to apply rigorous study designs, their validation approaches were not always as robust as those described in instrument development articles. This finding is worrisome because readers may draw conclusions from educational research that employs a strong evaluation design, when in reality the design is only as good as the measures used.

Even more concerning is our finding that curriculum assessment studies were the least likely to include validated instruments and frequently used weak research methods. Researchers cannot generate strong evidence for curricular approaches if the evaluation designs or assessment measures they use are suboptimal. Thus, an important finding from our work is the need for the use of well-validated instruments in quantitative and qualitative studies that represent both educational research and curriculum evaluation. One way to address this issue is to encourage medical school faculty to partner with investigators in either the school of education or public health/community medicine who have more experience with validating and using rigorous instruments and evaluation designs. Efforts to improve the dissemination of validated instruments and study strategies to promote their adoption also could prove beneficial.

The strengths of our study include the rigor with which we approached the consensus process across each phase of the review as well as the detailed information we abstracted from the articles that met our inclusion criteria. This process allowed us not only to consider the strength of the evidence for each included assessment tool but also to map specific studies and instruments to both the LCME accreditation requirements and the ACGME core competencies. By organizing our data in this way, we were able to provide a quick reference for educators who are looking for well-validated instruments to measure medical student competencies in the social and behavioral sciences at their own institutions.

Our systematic review has a number of limitations that arise from the breadth of the topic area, the lack of specificity in describing BSS competencies, and the related but distinctly different frameworks of the IOM, ACGME, and LCME. We identified the quality of the BSS tools and evaluation designs used in studies that were specific to different learner populations, such as undergraduate medical students. Nuances between the IOM, ACGME, and LCME frameworks should be taken into account when applying our findings from one distinct learner population to another. Although these nuances do exist, we also feel that the universality of the BSS competencies, as well as the need to assess them rigorously, outweighs any variance in learner level, and thus our findings can be of use in all learner populations. In addition, because of the breadth of the topic area and lack of specificity of the BSS competencies, the search terms we used (and their various Boolean operators) were complex and could be difficult to replicate. Although we searched the CINAHL, PsycINFO, and ERIC databases, our use of the IOM, ACGME, and LCME frameworks in data abstraction might have caused us to over-rely on the medical education literature. We did not include the EMBASE database, truncated search terms, or wildcards, which also limited our search. Next, we determined the quality scores by consensus using a subjective approach in assigning articles to strong, moderate, and weak categories. This process was challenging at times as some articles described high-quality instruments but weak study designs that affected our weighting of the evidence, while others described strong study designs but weak instruments that similarly affected our weighting. Finally, with the growth of peer evaluation and an emphasis on critical reflection in medical school curricula, we may have missed an important body of research because we excluded studies of self-reported competencies in the BSS domains; future reviews should consider addressing this gap.

In conclusion, we abstracted data from 114 articles, after reviewing a total of 5,104 identified studies. Of these, 33 (29%) yielded strong evidence to support BSS assessment tools that evaluated communication skills, cultural competence, empathy/compassion, behavioral health counseling, professionalism, and teamwork. Sixty-two articles (54%) yielded moderate evidence, and 19 (17%) yielded weak evidence. In the future, more rigorous validation and testing of assessment tools as well as more robust evaluation designs are needed in both educational research and curriculum assessment. At the same time, the conceptual and content domains of BSS pedagogy deserve similar, careful definition and increased specificity so that educators can better assess medical student competencies in areas such as population health and social inequalities and their influence on health status, particularly with regard to gender, race/ethnicity, socioeconomic resources, and the social organization of health care.

Acknowledgments: The authors gratefully acknowledge the assistance of Claire Diener, summer student, Oregon Health & Science University, in conducting this literature review.

References

1. Institute of Medicine. Improving Medical Education: Enhancing the Behavioral and Social Science Content of Medical School Curricula. Washington, DC: National Academies Press; 2004.
2. McGinnis JM, Foege WH. Actual causes of death in the United States. JAMA. 1993;270:2207–2212.
3. Centers for Disease Control and Prevention. Behavioral risk factor surveillance system prevalence data. 2010. http://www.cdc.gov/brfss/annual_data/annual_2010.htm#information. Accessed November 13, 2015.
4. Association of American Medical Colleges Behavioral and Social Science Expert Panel. Behavioral and social science foundations for future physicians. https://www.aamc.org/download/271020/data/behavioralandsocialsciencefoundationsforfuturephysicians.pdf. Accessed November 13, 2015.
5. Liaison Committee on Medical Education. Standards for accreditation of medical education programs leading to the M.D. degree. 2013. http://www.lcme.org/publications/functions.pdf. Accessed December 7, 2015.
6. Greensboro Area Health Education Center. ACGME core competency definitions. http://www.gahec.org/CME/Liasions/0)ACGME%20Core%20Competencies%20Definitions.htm. Accessed December 7, 2015.
7. Holmboe E, Carraccio C. The horizon in medical education: From milestones to EPAs to a new accreditation system. Health Resources and Services Administration webinar. http://bhpr.hrsa.gov/grants/medicine/technicalassistance/medicaleducation.pdf. Accessed December 7, 2015.
8. Hammick M, Dornan T, Steinert Y. Conducting a best evidence systematic review. Part 1: From idea to data coding. BEME guide no. 13. Med Teach. 2010;32:3–15.
9. Bosse HM, Schultz JH, Nickel M, et al. The effect of using standardized patients or peer role play on ratings of undergraduate communication training: A randomized controlled trial. Patient Educ Couns. 2012;87:300–306.
10. Daeppen JB, Fortini C, Bertholet N, et al. Training medical students to conduct motivational interviewing: A randomized controlled trial. Patient Educ Couns. 2012;87:313–318.
11. Gallagher TJ, Hartung PJ, Gerzina H, Gregory SW Jr, Merolla D. Further analysis of a doctor–patient nonverbal communication instrument. Patient Educ Couns. 2005;57:262–271.
12. Guiton G, Hodgson CS, Delandshere G, Wilkerson L. Communication skills in standardized-patient assessment of final-year medical students: A psychometric study. Adv Health Sci Educ Theory Pract. 2004;9:179–187.
13. Huntley CD, Salmon P, Fisher PL, Fletcher I, Young B. LUCAS: A theoretically informed instrument to assess clinical communication in objective structured clinical examinations. Med Educ. 2012;46:267–276.
14. Iramaneerat C, Myford CM, Yudkowsky R, Lowenstein T. Evaluating the effectiveness of rating instruments for a communication skills assessment of medical residents. Adv Health Sci Educ Theory Pract. 2009;14:575–594.
15. Fossli Jensen B, Gulbrandsen P, Benth JS, Dahl FA, Krupat E, Finset A. Interrater reliability for the four habits coding scheme as part of a randomized controlled trial. Patient Educ Couns. 2010;80:405–409.
16. Fossli Jensen B, Gulbrandsen P, Dahl FA, Krupat E, Frankel RM, Finset A. Effectiveness of a short course in clinical communication skills for hospital doctors: Results of a crossover randomized controlled trial (ISRCTN22153332). Patient Educ Couns. 2011;84:163–169.
17. Joshi R, Ling FW, Jaeger J. Assessment of a 360-degree instrument to evaluate residents’ competency in interpersonal and communication skills. Acad Med. 2004;79:458–463.
18. Krupat E, Frankel R, Stein T, Irish J. The four habits coding scheme: Validation of an instrument to assess clinicians’ communication behavior. Patient Educ Couns. 2006;62:38–45.
19. Lim BT, Moriarty H, Huthwaite M. “Being-in-role”: A teaching innovation to enhance empathic communication skills in medical students. Med Teach. 2011;33:e663–e669.
20. Lurie SJ, Mooney CJ, Nofziger AC, Meldrum SC, Epstein RM. Further challenges in measuring communication skills: Accounting for actor effects in standardised patient assessments. Med Educ. 2008;42:662–668.
21. Moulton CA, Tabak D, Kneebone R, Nestel D, MacRae H, LeBlanc VR. Teaching communication skills using the integrated procedural performance instrument (IPPI): A randomized controlled trial. Am J Surg. 2009;197:113–118.
22. Rees C, Sheard C, Davies S. The development of a scale to measure medical students’ attitudes towards communication skills learning: The communication skills attitude scale (CSAS). Med Educ. 2002;36:141–147.
23. Scheffer S, Muehlinghaus I, Froehmel A, Ortwein H. Assessing students’ communication skills: Validation of a global rating. Adv Health Sci Educ Theory Pract. 2008;13:583–592.
24. Wouda JC, van de Wiel HB. The communication competency of medical students, residents and consultants. Patient Educ Couns. 2012;86:57–62.
25. Yedidia MJ, Gillespie CC, Kachur E, et al. Effect of communications training on medical student performance. JAMA. 2003;290:1157–1165.
26. Crosson JC, Deng W, Brazeau C, Boyd L, Soto-Greene M. Evaluating the effect of cultural competency training on medical student attitudes. Fam Med. 2004;36:199–203.
27. Kirby RL, Crawford KA, Smith C, Thompson KJ, Sargeant JM. A wheelchair workshop for medical students improves knowledge and skills: A randomized controlled trial. Am J Phys Med Rehabil. 2011;90:197–206.
28. Wilkerson L, Fung CC, May W, Elliott D. Assessing patient-centered care: One approach to health disparities education. J Gen Intern Med. 2010;25(suppl 2):S86–S90.
29. Austin EJ, Evans P, Magnus B, O’Hanlon K. A preliminary study of empathy, emotional intelligence and examination performance in MBChB students. Med Educ. 2007;41:684–689.
30. Fields SK, Mahan P, Tillman P, Harris J, Maxwell K, Hojat M. Measuring empathy in healthcare profession students using the Jefferson Scale of Physician Empathy: Health provider–student version. J Interprof Care. 2011;25:287–293.
31. Hojat M, Gonnella JS, Nasca TJ, Mangione S, Vergare M, Magee M. Physician empathy: Definition, components, measurement, and relationship to gender and specialty. Am J Psychiatry. 2002;159:1563–1569.
32. Peterson LN, Eva KW, Rusticus SA, Lovato CY. The readiness for clerkship survey: Can self-assessment data be used to evaluate program effectiveness? Acad Med. 2012;87:1355–1360.
33. Shapiro J, Morrison E, Boker J. Teaching empathy to first year medical students: Evaluation of an elective literature and medicine course. Educ Health (Abingdon). 2004;17:73–84.
34. Mounsey AL, Bovbjerg V, White L, Gazewood J. Do students develop better motivational interviewing skills through role-play with standardised patients or with student colleagues? Med Educ. 2006;40:775–780.
35. Prochaska JJ, Gali K, Miller B, Hauer KE. Medical students’ attention to multiple risk behaviors: A standardized patient examination. J Gen Intern Med. 2012;27:700–707.
36. Spollen JJ, Thrush CR, Mui DV, Woods MB, Tariq SG, Hicks E. A randomized controlled trial of behavior change counseling education for medical students. Med Teach. 2010;32:e170–e177.
37. Truncali A, Lee JD, Ark TK, et al. Teaching physicians to address unhealthy alcohol use: A randomized controlled trial assessing the effect of a Web-based module on medical student performance. J Subst Abuse Treat. 2011;40:203–213.
38. Crossley J, Vivekananda-Schmidt P. The development and evaluation of a professional self identity questionnaire to measure evolving professional self-identity in health and social care students. Med Teach. 2009;31:e603–e607.
39. De Haes JC, Oort FJ, Hulsman RL. Summative assessment of medical students’ communication skills and professional attitudes through observation in clinical practice. Med Teach. 2005;27:583–589.
40. Noble LM, Kubacki A, Martin J, Lloyd M. The effect of professional skills training on patient-centredness and confidence in communicating with patients. Med Educ. 2007;41:432–440.
41. Youngblood P, Harter PM, Srivastava S, Moffett S, Heinrichs WL, Dev P. Design, development, and evaluation of an online virtual emergency department for training trauma teams. Simul Healthc. 2008;3:146–153.
42. Bachmann C, Barzel A, Roschlaub S, Ehrhardt M, Scherer M. Can a brief two-hour interdisciplinary communication skills training be successful in undergraduate medical education? Patient Educ Couns. 2013;93:298–305.
43. Bombeke K, Van Roosbroeck S, De Winter B, et al. Medical students trained in communication skills show a decline in patient-centred attitudes: An observational study comparing two cohorts during clinical clerkships. Patient Educ Couns. 2011;84:310–318.
44. Feeley TH, Anker AE, Soriano R, Friedman E. Using standardized patients to educate medical students about organ donation. Commun Educ. 2010;59:249–262.
45. Hausberg MC, Hergert A, Kröger C, Bullinger M, Rose M, Andreas S. Enhancing medical students’ communication skills: Development and evaluation of an undergraduate training program. BMC Med Educ. 2012;12:16.
46. Joekes K, Noble LM, Kubacki AM, Potts HW, Lloyd M. Does the inclusion of “professional development” teaching improve medical students’ communication skills? BMC Med Educ. 2011;11:41.
47. Tiuraniemi J, Läärä R, Kyrö T, Lindeman S. Medical and psychology students’ self-assessed communication skills: A pilot study. Patient Educ Couns. 2011;83:152–157.
48. Turan S, Elcin M, Uner S, Odabasi O, Sayek I, Senemoglu N. The impact of clinical visits on communication skills training. Patient Educ Couns. 2009;77:42–47.
49. Claramita M, Majoor G. Comparison of communication skills in medical residents with and without undergraduate communication skills training as provided by the Faculty of Medicine of Gadjah Mada University. Educ Health (Abingdon). 2006;19:308–320.
50. Mauksch L, Farber S, Greer HT. Design, dissemination, and evaluation of an advanced communication elective at seven U.S. medical schools. Acad Med. 2013;88:843–851.
51. Hulsman RL, Mollema ED, Hoos AM, de Haes JC, Donnison-Speijer JD. Assessment of medical communication skills by computer: Assessment method and student experiences. Med Educ. 2004;38:813–824.
52. Hulsman RL, Peters JF, Fabriek M. Peer-assessment of medical communication skills: The impact of students’ personality, academic and social reputation on behavioural assessment. Patient Educ Couns. 2013;92:346–354.
53. Ishikawa H, Hashimoto H, Kinoshita M, Yano E. Can nonverbal communication skills be taught? Med Teach. 2010;32:860–863.
54. Lie DA, Bereknyei S, Vega CP. Longitudinal development of medical students’ communication skills in interpreted encounters. Educ Health (Abingdon). 2010;23:466.
55. Karabilgin OS, Vatansever K, Caliskan SA, Durak Hİ. Assessing medical student competency in communication in the pre-clinical phase: Objective structured video exam and SP exam. Patient Educ Couns. 2012;87:293–299.
56. Shapiro SM, Lancee WJ, Richards-Bentley CM. Evaluation of a communication skills program for first-year medical students at the University of Toronto. BMC Med Educ. 2009;9:11.
57. Simmenroth-Nayda A, Weiss C, Fischer T, Himmel W. Do communication training programs improve students’ communication skills?—a follow-up study. BMC Res Notes. 2012;5:486.
58. Yeap R, Beevi Z, Lukman H. Evaluating IMU communication skills training programme: Assessment tool development. Med J Malaysia. 2008;63:244–246.
59. LeBlanc VR, Tabak D, Kneebone R, Nestel D, MacRae H, Moulton CA. Psychometric properties of an integrated assessment of technical and communication skills. Am J Surg. 2009;197:96–101.
60. Leung KK, Wang WD, Chen YY. Multi-source evaluation of interpersonal and communication skills of family medicine residents. Adv Health Sci Educ Theory Pract. 2012;17:717–726.
61. Wagner JA, Pfeiffer CA, Harrington KL. Evaluation of online instruction to improve medical and dental students’ communication and counseling skills. Eval Health Prof. 2011;34:383–397.
62. Bachner YG, Castel H, Kushnir T. Psychosocial abilities of first-year medical students participating in a clinical communication course. Natl Med J India. 2012;25:80–82.
63. King AE, Conrad M, Ahmed RA. Improving collaboration among medical, nursing and respiratory therapy students through interprofessional simulation. J Interprof Care. 2013;27:269–271.
64. Koponen J, Pyörälä E, Isotalus P. Comparing three experiential learning methods and their effect on medical students’ attitudes to learning communication skills. Med Teach. 2012;34:e198–e207.
65. Hook KM, Pfeiffer CA. Impact of a new curriculum on medical students’ interpersonal and interviewing skills. Med Educ. 2007;41:154–159.
66. Mason SR, Ellershaw JE. Preparing for palliative medicine; evaluation of an education programme for fourth year medical undergraduates. Palliat Med. 2008;22:687–692.
67. Schulz C, Möller MF, Seidler D, Schnell MW. Evaluating an evidence-based curriculum in undergraduate palliative care education: Piloting a phase II exploratory trial for a complex intervention. BMC Med Educ. 2013;13:1.
68. Aper L, Reniers J, Koole S, Valcke M, Derese A. Impact of three alternative consultation training formats on self-efficacy and consultation skills of medical students. Med Teach. 2012;34:e500–e507.
69. Christner JG, Stansfield RB, Schiller JH, Madenci A, Keefer PM, Pituch K. Use of simulated electronic mail (e-mail) to assess medical student knowledge, professionalism, and communication skills. Acad Med. 2010;85(10 suppl):S1–S4.
70. Crandall SJ, George G, Marion GS, Davis S. Applying theory to the design of cultural competency training for medical students: A case study. Acad Med. 2003;78:588–594.
71. Jarris YS, Bartleman A, Hall EC, Lopez L. A preclinical medical student curriculum to introduce health disparities and cultivate culturally responsive care. J Natl Med Assoc. 2012;104:404–411.
72. Mihalic AP, Morrow JB, Long RB, Dobbie AE. A validated cultural competence curriculum for US pediatric clerkships. Patient Educ Couns. 2010;79:77–82.
73. Hoang L, LaHousse SF, Nakaji MC, Sadler GR. Assessing deaf cultural competency of physicians and medical students. J Cancer Educ. 2011;26:175–182.
74. Genao I, Bussey-Jones J, St George DM, Corbie-Smith G. Empowering students with cultural competence knowledge: Randomized controlled trial of a cultural competence curriculum for third-year medical students. J Natl Med Assoc. 2009;101:1241–1246.
75. Weissman JS, Betancourt J, Campbell EG, et al. Resident physicians’ preparedness to provide cross-cultural care. JAMA. 2005;294:1058–1067.
76. Cushing A, Evans D, Hall A. Medical students’ attitudes and behaviour towards sexual health interviewing: Short- and long-term evaluation of designated workshops. Med Teach. 2005;27:422–428.
77. Kim S, Spielberg F, Mauksch L, et al. Comparing narrative and multiple-choice formats in online communication skill assessment. Med Educ. 2009;43:533–541.
78. Wouda JC, Zandbelt LC, Smets EM, van de Wiel HB. Assessment of physician competency in patient education: Reliability and validity of a model-based instrument. Patient Educ Couns. 2011;85:92–98.
79. Hoover CR, Wong CC, Azzam A. From primary care to public health: Using problem-based learning and the ecological model to teach public health to first year medical students. J Community Health. 2012;37:647–652.
80. McGarvey E, Peterson C, Pinkerton R, Keller A, Clayton A. Medical students’ perceptions of sexual health issues prior to a curriculum enhancement. Int J Impot Res. 2003;15(suppl 5):S58–S66.
81. Tang TS, Fantone JC, Bozynski ME, Adams BS. Implementation and evaluation of an undergraduate sociocultural medicine program. Acad Med. 2002;77:578–585.
82. Lim BT, Moriarty H, Huthwaite M, Gray L, Pullon S, Gallagher P. How well do medical students rate and communicate clinical empathy? Med Teach. 2013;35:e946–e951.
83. Shapiro J, Rucker L, Boker J, Lie D. Point-of-view writing: A method for increasing medical students’ empathy, identification and expression of emotion, and insight. Educ Health (Abingdon). 2006;19:96–105.
84. Mercer SW, Maxwell M, Heaney D, Watt GC. The consultation and relational empathy (CARE) measure: Development and preliminary validation and reliability of an empathy-based consultation process measure. Fam Pract. 2004;21:699–705.
85. Fernández-Olano C, Montoya-Fernández J, Salinas-Sánchez AS. Impact of clinical interview training on the empathy level of medical students and medical residents. Med Teach. 2008;30:322–324.
86. Bass PF III, Stetson BA, Rising W, Wesley GC, Ritchie CS. Development and evaluation of a nutrition and physical activity counseling module for first-year medical students. Med Educ Online. 2004;9:23.
87. Bell K, Cole BA. Improving medical students’ success in promoting health behavior change: A curriculum evaluation. J Gen Intern Med. 2008;23:1503–1506.
88. Brown RL, Pfeifer JM, Gjerde CL, Seibert CS, Haq CL. Teaching patient-centered tobacco intervention to first-year medical students. J Gen Intern Med. 2004;19(5 pt 2):534–539.
89. Han PK, Joekes K, Elwyn G, et al. Development and evaluation of a risk communication curriculum for medical students. Patient Educ Couns. 2014;94:43–49.
90. Kosowicz LY, Pfeiffer CA, Vargas M. Long-term retention of smoking cessation counseling skills learned in the first year of medical school. J Gen Intern Med. 2007;22:1161–1165.
91. Martino S, Haeseler F, Belitsky R, Pantalon M, Fortin AH 4th. Teaching brief motivational interviewing to year three medical students. Med Educ. 2007;41:160–167.
92. McEvoy M, Schlair S, Sidlo Z, Burton W, Milan F. Assessing third-year medical students’ ability to address a patient’s spiritual distress using an OSCE case. Acad Med. 2014;89:66–70.
93. White LL, Gazewood JD, Mounsey AL. Teaching students behavior change skills: Description and assessment of a new motivational interviewing curriculum. Med Teach. 2007;29:e67–e71.
94. Edwardsen EA, Morse DS, Frankel RM. Structured practice opportunities with a mnemonic affect medical student interviewing skills for intimate partner violence. Teach Learn Med. 2006;18:62–68.
95. Haeseler F, Fortin AH 6th, Pfeiffer C, Walters C, Martino S. Assessment of a motivational interviewing curriculum for year 3 medical students using a standardized patient case. Patient Educ Couns. 2011;84:27–30.
96. Prochaska JJ, Teherani A, Hauer KE. Medical students’ use of the stages of change model in tobacco cessation counseling. J Gen Intern Med. 2007;22:223–227.
97. Stolz D, Langewitz W, Meyer A, et al. Enhanced didactic methods of smoking cessation training for medical students—a randomized study. Nicotine Tob Res. 2012;14:224–228.
98. Boenink AD, de Jonge P, Smal K, Oderwald A, van Tilburg W. The effects of teaching medical professionalism by means of vignettes: An exploratory study. Med Teach. 2005;27:429–432.
99. Kittmer T, Hoogenes J, Pemberton J, Cameron BH. Exploring the hidden curriculum: A qualitative analysis of clerks’ reflections on professionalism in surgical clerkship. Am J Surg. 2013;205:426–433.
100. Abadel FT, Hattab AS. Patients’ assessment of professionalism and communication skills of medical graduates. BMC Med Educ. 2014;14:28.
101. Kelly M, O’Flynn S, McLachlan J, Sawdon MA. The clinical conscientiousness index: A valid tool for exploring professionalism in the clinical undergraduate setting. Acad Med. 2012;87:1218–1224.
102. Brock D, Abu-Rish E, Chiu CR, et al. Interprofessional education in team communication: Working together to improve patient safety. Postgrad Med J. 2013;89:642–651.
103. Yuasa M, Nagoshi M, Oshiro-Wong C, Tin M, Wen A, Masaki K. Standardized patient and standardized interdisciplinary team meeting: Validation of a new performance-based assessment tool. J Am Geriatr Soc. 2014;62:171–174.
104. Baribeau DA, Mukovozov I, Sabljic T, Eva KW, deLottinville CB. Using an objective structured video exam to identify differential understanding of aspects of communication skills. Med Teach. 2012;34:e242–e250.
105. Chisholm A, Hart J, Mann K, Peters S. Development of a behaviour change communication tool for medical students: The “tent pegs” booklet. Patient Educ Couns. 2014;94:50–60.
106. Endres J, Laidlaw A. Micro-expression recognition training in medical students: A pilot study. BMC Med Educ. 2009;9:47.
107. Kalet AL, Mukherjee D, Felix K, et al. Can a Web-based curriculum improve students’ knowledge of, and attitudes about, the interpreted medical interview? J Gen Intern Med. 2005;20:929–934.
108. Lee CA, Chang A, Chou CL, Boscardin C, Hauer KE. Standardized patient-narrated Web-based learning modules improve students’ communication skills on a high-stakes clinical skills examination. J Gen Intern Med. 2011;26:1374–1377.
109. Mukohara K, Kitamura K, Wakabayashi H, Abe K, Sato J, Ban N. Evaluation of a communication skills seminar for students in a Japanese medical school: A non-randomized controlled study. BMC Med Educ. 2004;4:24.
110. Schildmann J, Kupfer S, Burchardi N, Vollmann J. Teaching and evaluating breaking bad news: A pre–post evaluation study of a teaching intervention for medical students and a comparative analysis of different measurement instruments and raters. Patient Educ Couns. 2012;86:210–219.
111. Chun MB, Young KG, Honda AF, Belcher GF, Maskarinec GG. The development of a cultural standardized patient examination for a general surgery residency program. J Surg Educ. 2012;69:650–658.
112. Sanchez NF, Rabatin J, Sanchez JP, Hubbard S, Kalet A. Medical students’ ability to care for lesbian, gay, bisexual, and transgendered patients. Fam Med. 2006;38:21–27.
113. Yacht AC, Suglia SF, Orlander JD. Evaluating an end-of-life curriculum in a medical residency program. Am J Hosp Palliat Care. 2006;23:439–446.
114. Chun MB, Deptula P, Morihara S, Jackson DS. The refinement of a cultural standardized patient examination for a general surgery residency program. J Surg Educ. 2014;71:398–404.
115. Fletcher I, Leadbetter P, Curran A, O’Sullivan H. A pilot study assessing emotional intelligence training and communication skills with 3rd year medical students. Patient Educ Couns. 2009;76:376–379.
116. Goldsmith J, Wittenberg-Lyles E, Shaunfield S, Sanchez-Reilly S. Palliative care communication curriculum: What can students learn from an unfolding case? Am J Hosp Palliat Care. 2011;28:236–241.
117. Loke S-K, Blyth P, Swan J. In search of a method to assess dispositional behaviours: The case of Otago Virtual Hospital. Australas J Educ Technol. 2012;28:441–458.
118. Rosenthal S, Howard B, Schlussel YR, et al. Humanism at heart: Preserving empathy in third-year medical students. Acad Med. 2011;86:350–358.
119. Van Winkle LJ, Fjortoft N, Hojat M. Impact of a workshop about aging on the empathy scores of pharmacy and medical students. Am J Pharm Educ. 2012;76:9.
120. Foley KL, George G, Crandall SJ, Walker KH, Marion GS, Spangler JG. Training and evaluating tobacco-specific standardized patient instructors. Fam Med. 2006;38:28–37.
121. Haist SA, Wilson JF, Pursley HG, et al. Domestic violence: Increasing knowledge and improving skills with a four-hour workshop using standardized patients. Acad Med. 2003;78(10 suppl):S24–S26.
122. Margalit AP, Glick SM, Benbassat J, Cohen A, Katz M. Promoting a biopsychosocial orientation in family practice: Effect of two teaching programs on the knowledge and attitudes of practising primary care physicians. Med Teach. 2005;27:613–618.
123. Hanson JL, Bannister SL, Clark A, Raszka WV Jr. Oh, what you can see: The role of observation in medical student education. Pediatrics. 2010;126:843–845.
124. Fromme HB, Karani R, Downing SM. Direct observation in medical education: A review of the literature and evidence for validity. Mt Sinai J Med. 2009;76:365–371.
125. Gigante J. Direct observation of medical trainees. Pediatr Ther. 2013;3:e118.
126. Holmboe ES. Faculty and the observation of trainees’ clinical skills: Problems and opportunities. Acad Med. 2004;79:16–22.
127. National Center for Interprofessional Practice and Education Web site. https://nexusipe.org. Accessed November 13, 2015.

Appendix 1

Details of the 33 Assessment Tools With “Strong” Evidence of Validity or Reliability Included in a Systematic Review of the Literature on Tools to Assess Behavioral and Social Science Competencies in Medical Education, 2002 to 2014

Copyright © 2016 by the Association of American Medical Colleges