Communication skills in medicine comprise a vast set of abilities, including gathering information, building relationships, demonstrating empathy, and explaining and planning.1 Exhibiting good communication skills is regarded as an essential professional attribute in modern medical practice.2 Effective communication enhances patient empowerment and thus increases healthy behaviors among patients in both acute and long-term care.3–5 As a consequence, medical faculties are increasing their efforts to promote the learning of communication skills by medical students.
Over the last 20 years, simulation-based training has become a core educational strategy for learning professional skills in healthcare, including communication abilities.6 Simulation is a generic term that refers to an artificial representation of a real-world situation aiming to achieve educational goals through experiential learning. Simulation-based medical education consists of any educational activity in which simulation replicates clinical scenarios.7 Simulation programs for training communication skills have been named differently according to who plays the role of the patient. According to the Healthcare Simulation Dictionary (https://www.ssih.org/Dictionary), the terms “simulated patient” (SimP) and “standardized patient” are interchangeable, and both refer to programs in which the role of the patient is played by an actor, a lay person, or a real patient. By contrast, the term “peer role-play” (PRP) typically refers to programs in which learners also play the role of the patient.8 In simulation-based education, the educational objective of role-play training is to rehearse situations to improve learners' abilities to face similar situations in clinical practice.7
Compared with other simulation programs for training communication skills, PRP exhibits both advantages and disadvantages. Because the role of the patient is not played by a predefined and trained participant, sessions may be less structured in terms of scenario and role-play training, which may result in increased between-group variability in how scenarios unfold and in the precise educational content. This can be detrimental to homogeneous learning between students and can also raise methodological and ethical issues when PRP is used for research or student evaluation. By contrast, PRP workshops are generally easier to set up within medical faculties. In addition, it has been hypothesized that when playing the role of patients, students could more easily develop some important communication skills, such as empathy.9
Evaluating and comparing the educational and cost-related value of the different simulation-based methods for training communication skills is thus crucial for stakeholders of medical faculties. In a previous systematic review of the literature, the entire body of evidence on “SimP” programs was assessed and reviewed.10 This work revealed that medical students globally considered training programs based on SimPs to be a valuable and effective method for learning communication skills. However, the authors also concluded that there was limited evidence of how this translates into patient outcomes, and at this stage, they found no indication of cost-benefit advantages for this type of training compared with former methods.10
With a similar systematic approach, we gathered and reviewed all the studies that pertained to PRP among medical students. The selected studies were scored with a specific tool for assessing educational research. The objective of this review was to determine the effectiveness of PRP for improving communication skills in medical students based on each Kirkpatrick evaluation level11 [ie, (a) reaction, (b) learning, (c) behavior, and (d) results], as well as cost-related outcomes.
We conducted a systematic review and narrative synthesis of studies that assessed PRP for improving the communication skills of medical students. The systematic review was conducted and reported following the PRISMA (Preferred Reporting Items for Systematic Reviews and Meta-Analyses) Statement12 (see table, Supplemental Digital Content 1, which addresses in detail the PRISMA checklist for this review, http://links.lww.com/SIH/A472).
A systematic literature search was performed on April 15, 2018, using the MedLine, PsycInfo, and ERIC (Education Resources Information Center) databases, and it was based on the following key word algorithm: (“role play” OR “role playing”) AND (“medical” OR “medicine”) AND (“undergraduate” OR “students” OR “education” OR “training”). We also reviewed the bibliographical references of the selected articles and the literature reviews on this topic, with the aim of including missing records in our selection of studies.
After elimination of duplicates, a preliminary selection on titles was conducted separately by 2 authors (A.G. and B.R.). In case of disagreement about a particular study, the involvement of a third author (P.L.) was used to determine whether the study was retained for the selection of abstracts. A second selection step (selection of abstracts) was undertaken by the same authors using a similar process. A third selection step (selection of full texts) identified the studies ultimately included in the systematic review (see the flow chart in Fig. 1).
The studies selected needed to meet the following inclusion criteria: (a) studies pertaining to medical students, any year of study; (b) studies based on simulation training programs; (c) studies assessing communication skills with quantitative methods; and (d) studies written in English. The exclusion criteria were as follows: (a) qualitative studies, literature reviews, letters, or opinion articles; (b) studies exclusively assessing learners other than medical students (eg, senior physicians, residents or trainees, or nurses); and (c) studies based on methods other than simulation or training abilities other than communication skills. The decision to exclude qualitative studies was consistent with the tools used for evaluating the selected studies (see hereinafter), which were fit for quantitative studies only. In addition, qualitative studies were not well matched with our main objective, which was to determine the effectiveness of PRP for teaching communication skills. In mixed studies (ie, studies mixing quantitative and qualitative methods), we selected and reviewed only the quantitative data.
For all the studies included, the following features were assessed: type of study (observational, interventional, other), type and number of participants, study aim, study protocol, and main results. This review applied Kirkpatrick's 4-level training evaluation model, which is widely used in health education programs.11 The outcomes levels were categorized as follows:
- Level 1, “Reaction” (the learner's satisfaction with the learning experience, training/assessment methods, materials, quality of instruction, organization);
- Level 2, “Learning” (the learner's modification of knowledge, skills, attitudes, and perceptions), with 5 subcategories: 2a, “knowledge”; 2b, “skill”; 2c, “attitude”; 2d, “confidence”; and 2e, “commitment”;
- Level 3, “Behavior” (a behavioral change in transfer to a real-life environment or workplace);
- Level 4, “Results” (improvement in the health outcomes of a patient as a result of the educational program).
For each study, the different Kirkpatrick levels of educational research11 were assessed and noted.
We assessed the methodological quality of the included studies using the “Medical Education Research Quality Instrument” (MERSQI), which was developed to evaluate medical education research studies.13 This 10-item instrument consists of the following 6 domains: (a) study design, (b) sampling, (c) type of data, (d) validity of evaluation, (e) data analysis, and (f) outcome levels. The maximum score for each domain is 3, and the minimum score is 0 or 1. The total possible score ranges from 5 to 18, with a higher score indicating higher quality. The full blank form of the MERSQI can be freely accessed on the Internet (https://scs.msu.edu/sa/wfp/files/Quality%20Instrument.pdf).
Regarding the assessment of the instruments used in the different studies, we collected information on their validity in the study as well as previous published validation studies. An instrument was considered to have published validity data if at least one aspect of validation (reliability, validity, or responsiveness) was published in an educational study. The presence or absence of published validity data is reported for each scale, including references for validation studies. The processes of data extraction and quality assessment were undertaken independently by 2 different authors (M.D. and R.R., S.C. and M.D., or R.R. and S.C.) for each study, with intervention of a third author (A.G. or B.R.) in case of discrepancies.
Because of the wide variation in the design, population, and objectives of the studies reviewed, it was not possible to perform a meta-analysis. Nevertheless, whenever possible, we represented the findings of each study in the form of effect sizes. The calculation of effect sizes was based on the standardized mean difference, following Cohen's method.14 With this definition, Cohen d values of 0.2, 0.5, and 0.8 are considered small, moderate, and large, respectively.15 We attempted to contact authors for missing data but were rarely successful. Thus, for effectiveness outcomes, we qualitatively described the main findings of the studies, which were first classified according to their assessment objective using the Kirkpatrick level.11 The results were discussed according to study type (ie, uncontrolled study with only a posttest assessment, uncontrolled study with pretest and posttest assessments, nonrandomized controlled study, and randomized controlled study), and the quality assessment was based on the MERSQI score or subscores.
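As an illustration of the effect size computation described above, the pooled-standard-deviation form of Cohen's d can be sketched as follows (the group statistics below are invented for the example, and the function names are ours, not from the review):

```python
import math

def cohens_d(mean1, sd1, n1, mean2, sd2, n2):
    """Standardized mean difference (Cohen's d) using the pooled SD."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2)
                          / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

def magnitude(d):
    """Conventional qualitative labels: 0.2 small, 0.5 moderate, 0.8 large."""
    d = abs(d)
    if d >= 0.8:
        return "large"
    if d >= 0.5:
        return "moderate"
    return "small"

# Hypothetical posttest communication scores for two groups of students
d = cohens_d(mean1=74.0, sd1=10.0, n1=40, mean2=68.0, sd2=10.0, n2=40)
print(round(d, 2), magnitude(d))  # → 0.6 moderate
```

The same convention (0.2/0.5/0.8 thresholds) is applied wherever effect sizes are reported in the Results.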
For cost-related outcomes, the results were classified according to the type of cost-related evaluation. Cost-minimization analysis measures the difference in costs between alternative interventions, assuming that the interventions are equally effective and differ only in costs. When effectiveness and costs are assessed simultaneously, outcomes are combined with cost measures to obtain a ratio calculated as the assessment of the outcomes divided by the net change in costs. For each study, the data used for the cost-related evaluation are detailed and qualitatively discussed.
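The effectiveness-to-cost ratio described above reduces to a simple division; a minimal sketch, with wholly invented outcome scores and per-student costs (the function name is ours), would be:

```python
def effectiveness_cost_ratio(outcome_score, net_cost):
    """Assessment of the outcomes divided by the net change in costs;
    a higher ratio means more effectiveness obtained per unit of cost."""
    if net_cost <= 0:
        raise ValueError("net cost must be positive")
    return outcome_score / net_cost

# Hypothetical comparison of two training formats (numbers invented):
# a cheaper format can yield a better ratio despite a lower raw outcome.
ratio_a = effectiveness_cost_ratio(outcome_score=7.4, net_cost=10.0)
ratio_b = effectiveness_cost_ratio(outcome_score=9.0, net_cost=20.0)
print(round(ratio_a, 2), round(ratio_b, 2), ratio_a > ratio_b)  # → 0.74 0.45 True
```

Note that such a ratio is only meaningful when the outcome is measured on the same scale for all compared interventions.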
Study Identification and Selection
Twenty-two studies were finally included in the systematic review (Fig. 1). The details of these studies are reported in a specific table (see Supplemental Digital Content 2, which details the location, design, participants' features, type of intervention, main outcomes, main findings, MERSQI score, and Kirkpatrick levels of all the studies included in the systematic review, http://links.lww.com/SIH/A473). Below are the main characteristics of the studies, as well as the specific set of findings for each Kirkpatrick level and the findings on cost-related assessment, which are not considered in the Kirkpatrick classification.11
Descriptive Characteristics of the Studies
Regarding quality assessment, the mean ± SD MERSQI score was 10.04 ± 2.4, and the median MERSQI score was 10.25 (Q1 = 8, Q3 = 12.75) out of a maximum of 18.13 Detailed MERSQI scores for each study are available in Supplemental Digital Content 3 (http://links.lww.com/SIH/A474, MERSQI scores). The number of participants ranged from 21 to 351 students. A large majority of the studies (n = 17, 73.9%) concerned the first 3 years of medical studies. The simulated situations were, in order of frequency, situations of (a) general practice (n = 13); (b) pediatrics (n = 4); (c) addiction medicine (n = 2); and (d) other (n = 4), ie, psychiatry, cardiology, gynecology, or technical work that involved specific training on communication aspects.
Among the 22 studies included in the review, 10 were randomized controlled trials,16–26 and 1 was a nonrandomized controlled trial.27 One of these studies led to 2 separate publications.17,18 Uncontrolled studies (n = 11) comprised either preintervention and postintervention assessments or only postintervention assessment.28–38 Two randomized studies assessed both the compared efficacy and compared cost-efficacy ratio of PRP versus SimP.17,18
Within the controlled studies, the control group consisted of the following: (a) a “no-intervention” group (ie, a group with merely a theoretical teaching but with no practical training program16–21,24,27); (b) a SimP group in which the role of the patient was played by a trained real patient16–18,23; or (c) a SimP group in which the role of the patient was played by a professional actor.25,26
Main Findings on “Reaction” (Kirkpatrick Level 1)
Fifteen studies16–18,25–28,30–38 assessed the students' reaction regarding the use of PRP for training communication skills. A self-report questionnaire was used in all these studies, with the sole exception of Koponen et al,26 who performed a mixed study with both a self-report questionnaire and a qualitative survey using open questions. Overall, PRP was deemed realistic and acceptable by most students16,34,36,38 and was perceived as effective for training and improving communication skills.17,18,26,27,30,33,36,37 When directly compared with SimP-based training, students expressed greater interest in and more support for the future development of SimP programs,16 but they reported feeling closer to the patient's standpoint in PRP training.18
Main Findings on “Learning” (Kirkpatrick Level 2)
Twelve studies assessed the effectiveness of PRP training with regard to learning, with an overall quality assessment ranging from 10.5 to 14 according to the MERSQI.13 The specific results on the “learning” aspects of PRP on communication are listed in a specific table (see Supplemental Digital Content 4, which details the type of study, nature of the comparison group, when appropriate, MERSQI score, types of questionnaires and measures used, and main findings with effect sizes, http://links.lww.com/SIH/A475).
Three studies assessed “knowledge” gains, 7 studies explored the impact on “skills,” 2 studies assessed the effects of PRP on “attitudes,” and 3 studies assessed the impact on “confidence.”
- Knowledge (Kirkpatrick Level 2a)
Two uncontrolled studies using a pretest-posttest design with a self-report questionnaire without published validity data28,29 and a randomized controlled study using a self-report questionnaire with published validity data21 found a significant effect on communication knowledge (P = 0.038, Cohen d = 0.318).
- Skills (Kirkpatrick Level 2b)
Seven controlled studies17–20,22–24,27 assessed the impact of PRP training on communication skills. Compared with a control group with no education on communication skills, groups that received PRP training exhibited significantly enhanced communication skills in 5 studies.17–20,22,27 The evaluation of students was conducted during a SimP session by tutors in 4 studies17–19,22,27 and using a self-report questionnaire in the fifth study.20 No difference in communication skills was found in the study of Windish et al,24 in which skills were evaluated by the standardized patients themselves during a SimP session.
Papadakis et al23 found no difference between PRP and SimP sessions in inducing changes in communication skills among students, but they used an instrument without published validity data. In contrast, Bosse et al17 found that PRP was more effective than SimP in fostering skills change, with a moderate effect size (Cohen d = 0.71), using an instrument with published validity data. In one study, the difference was larger after adding e-learning sessions before PRP.19
- Attitude (Kirkpatrick Level 2c)
The impact of PRP training on attitude was reported in 2 studies25,26 from the same team, both comparing PRP training with SimP training and theater training with a professional actor. The “Communication Skills Attitude Scale”39 was used to assess students' attitudes and showed significant improvement in all 3 groups, without between-group differences.25,26
- Confidence (Kirkpatrick Level 2d)
The impact of PRP training on confidence or self-efficacy was reported in 3 studies. Two uncontrolled studies28,29 using a pretest and posttest design both included confidence-related questions in a self-report questionnaire without published validity data, and both showed a significant improvement in communication confidence-related items after PRP training. Bosse et al17,18 used a self-efficacy assessment questionnaire without published validity data to compare PRP training, SimP training, and a control group without communication training. Peer role-play training and SimP training significantly improved students' level of self-efficacy compared with that of the control group (Cohen d = 0.673 and 0.32, respectively), but there was no significant difference between PRP training and SimP training.
- Commitment (Kirkpatrick Level 2e)
No study was found that explored the effects of PRP-based training sessions on students' commitment (ie, the intention to implement the acquired knowledge and know-how in clinical practice).
Main Findings on “Behaviors” (Kirkpatrick Level 3)
No study that explored the application of learning when students are confronted with real patients was found.
Main Findings on “Results” (Kirkpatrick Level 4)
No study that explored the practical results of PRP-based training sessions for clinical populations was found.
Main Findings on Cost-Related Aspects
Two high-quality studies included a cost evaluation of the training sessions.17,23 Both studies were randomized controlled trials comparing PRP with SimP sessions.
Papadakis et al23 performed a cost-minimization analysis that included the costs of training and intervention for actors, as well as administrative costs. The results were standardized on the basis of a 100-student class. They reported global costs that were 5 times greater for SimP than for PRP.23 Bosse et al17 conducted a cost-effectiveness analysis in which effectiveness was assessed by a standardized evaluation during an objective structured clinical examination. Costs included both SimP training and employment costs, as well as tutors' costs. The effectiveness-to-cost ratio was found to be better for PRP than for SimP (0.74 and 0.45,17 respectively).
The main aim of this review was to gather and evaluate studies that assessed the effectiveness of PRP-based training programs for developing communication skills among medical students, with a particular focus on the studies that compared PRP with other educational methods, including other simulation-based teaching methods, such as SimP.
In the end, 22 studies met the inclusion criteria. For comparison, a similar review on SimP found 60 articles.10 No study or survey has ever investigated which types of simulation-based education programs are the most common among medical faculties, in particular for teaching communication skills. Should PRP be more widespread than SimP in medical faculties, the discrepancy found in the number of studies assessing each type of technique would be a striking finding.
Students' feedback on PRP regarding acceptability features was globally good in all the studies included in our systematic review, although controlled studies revealed that there was no significant difference in acceptability when comparing PRP with other simulation-based methods, particularly SimP-based training. Overall, simulation-based education programs rely on entertaining and proactive methods,40 which can explain why a good level of acceptability is generally reported by medical students, including when used for technical skills.7 This favorable appreciation may be one of the factors that has positioned simulation as a central pedagogic medium for current and future medical education.
Another important finding of the review is that PRP appeared globally effective for improving communication-related knowledge, skills, and attitudes (Kirkpatrick Level 2) among students. However, only one study explored the persistence of that change at 6 months, and additional investigations are thus warranted to determine whether the communication skills acquired or improved persist in the long term among medical students. Of note, the results did not support the notion that SimP is more effective than PRP for developing communication abilities. However, the number of studies that directly compared the 2 approaches was limited, and no definite conclusion on the comparative effectiveness of the 2 approaches can be drawn at this stage. Peer role-play was also found to improve empathy more than SimP in one study, which suggests that the active attempt to take on a role may promote a degree of learner insight into the experience of being a patient. However, much more research is needed to adequately appraise the differential impacts on students' learning between playing and not playing the role of the patient. Unfortunately, no study has explored behavioral changes with real-life patients (Kirkpatrick Level 3) or results for clinical outcomes (Kirkpatrick Level 4) using PRP-based educational programs. However, the lack of practical assessment in real life is a common issue in all simulation-based programs in medical education,41 including SimP training.10
The 2 studies that compared the cost and cost-effectiveness balance between SimP and PRP suggested that PRP is less costly than SimP.17,23 Additional direct cost and cost-effectiveness comparisons between SimP and PRP will be very important in the upcoming years. If future studies confirm that PRP-based programs provide results regarding changes in knowledge, attitudes, or even clinical outcomes that are similar to the results for SimP, but with reduced overall costs, this may have important consequences for the strategic decisions made by medical faculties in terms of simulation development programs.
Several limitations of this review should be acknowledged. As previously highlighted, the small number of studies, in particular comparative studies, is a major limitation of this review. In addition, there was high variability in the PRP programs, including in the theme, group size, and number of sessions. Moreover, the overall quality of the studies based on the MERSQI13 was also very heterogeneous, with the SimP-controlled studies generally being those with the highest quality level. The results of studies that only used a “no education” group as the control group were particularly limited in terms of possible conclusions. Another limiting factor was the lack of consensus between researchers on the definition of the domains of communication skills, the heterogeneous choice of outcome measurement tools, and the lack of validity of several checklists. This led us to undertake a qualitative review of the results,42 as no real meta-analysis was possible. Future research should thus promote comparative and homogeneous designs that focus on the sustained impact of simulation-based programs on communication skills among medical students. Moreover, as neither Kirkpatrick level 3 nor level 4 has been assessed in the current body of literature, exploring the impact of PRP on real-life situations is warranted in future studies.
Another possible criticism of this review is that it did not consider studies pertaining to residents or more senior trainees. However, these learners constitute a distinct population with greater practical experience with patients, in whom issues regarding the improvement of communication skills may differ substantially from those expected in younger medical students. These arguments also apply to graduate physicians and paramedical professions, and opposite findings for PRP and SimP could be observed in the future depending on the age, experience, and technical abilities of the learners.
The current body of literature provides very preliminary evidence that PRP could be as effective as and possibly less costly than SimP to teach communication skills among medical students. Additional studies are needed to confirm this statement and to extend it to other types of medical and paramedical learners.
1. Kurtz S, Draper J, Silverman J. Teaching and Learning Communication Skills in Medicine. Boca Raton, Florida: CRC Press; 2004.
2. Simpson M, Buckman R, Stewart M, et al. Doctor-patient communication: the Toronto consensus statement. BMJ
3. Bleetman A, Sanusi S, Dale T, Brace S. Human factors and error prevention in emergency medicine. Emerg Med J EMJ
4. Siu J, Maran N, Paterson-Brown S. Observation of behavioural markers of non-technical skills in the operating room and their relationship to intra-operative incidents. Surg J R Coll Surg Edinb Irel
5. Uramatsu M, Fujisawa Y, Mizuno S, Souma T, Komatsubara A, Miki T. Do failures in non-technical skills contribute to fatal medical accidents in Japan? A review of the 2010-2013 National Accident Reports. BMJ Open
6. Gordon M, Hill E, Stojan JN, Daniel M. Educational interventions to improve handover in health care: an updated systematic review. Acad Med J Assoc Am Med Coll
7. Nestel D, Kelly M, Jolly B, Watson M. Healthcare Simulation Education: Evidence, Theory & Practice. Hoboken, New Jersey: Wiley-Blackwell; 2017.
8. Collins JP, Harden RM. AMEE Medical Education Guide No. 13: real patients, simulated patients and simulators in clinical examinations. Med Teach
9. Rasasingam D, Kerry G, Gokani S, Zargaran A, Ash J, Mittal A. Being a patient: a medical student's perspective. Adv Med Educ Pract
10. Kaplonyi J, Bowles K-A, Nestel D, et al. Understanding the impact of simulated patients on health care learners' communication skills: a systematic review. Med Educ
11. Kirkpatrick DL, Kirkpatrick JD. Evaluating Training Programs: The Four Levels. 3rd ed. San Francisco: Berrett-Koehler; 2010.
12. Moher D, Liberati A, Tetzlaff J, Altman DG, PRISMA Group. Preferred Reporting Items for Systematic Reviews and Meta-Analyses: The PRISMA Statement. PLoS Med
13. Reed DA, Cook DA, Beckman TJ, Levine RB, Kern DE, Wright SM. Association between funding and quality of published medical education research. JAMA
14. Borenstein M, Hedges L, Higgins J. Introduction to Meta-Analysis. Hoboken, New Jersey: John Wiley & Sons; 2011.
15. Cohen J. A power primer. Psychol Bull
16. Bosse HM, Nickel M, Huwendiek S, Jünger J, Schultz JH, Nikendei C. Peer role-play and standardised patients in communication training: a comparative study on the student perspective on acceptability, realism, and perceived effect. BMC Med Educ 2010;10:27. doi: 10.1186/1472-6920-10-27.
17. Bosse HM, Nickel M, Huwendiek S, Schultz JH, Nikendei C. Cost-effectiveness of peer role play and standardized patients in undergraduate communication training. BMC Med Educ 2015;15:183. doi: 10.1186/s12909-015-0468-1.
18. Bosse HM, Schultz JH, Nickel M, et al. The effect of using standardized patients or peer role play on ratings of undergraduate communication training: a randomized controlled trial. Patient Educ Couns
19. Gartmeier M, Bauer J, Fischer MR, et al. Fostering professional communication skills of future physicians and teachers: effects of e-learning with video cases and role-play. Instr Sci
20. Kiosses VN, Tatsioni A, Dimoliatis ID, Hyphantis T. “Empathize with me, doctor!” Medical undergraduates training project: development, application, six-months follow-up. J Educ Train Stud
21. Lau KCJ, Stewart SM, Fielding R. Preliminary evaluation of “interpreter” role plays in teaching communication skills to medical undergraduates.
22. Nikendei C, Kraus B, Schrauth M, et al. Integration of role-playing into technical skills training: a randomized controlled trial. Med Teach
23. Papadakis MA, Croughan-Minihane M, Fromm LJ, Wilkie HA, Ernster VL. A comparison of two methods to teach smoking-cessation techniques to medical students. Acad Med J Assoc Am Med Coll
24. Windish DM, Price EG, Clever SL, Magaziner JL, Thomas PA. Teaching medical students the important connection between communication and clinical reasoning. J Gen Intern Med
25. Koponen J, Pyörälä E, Isotalus P. Comparing three experiential learning methods and their effect on medical students' attitudes to learning communication skills. Med Teach
26. Koponen J, Pyörälä E, Isotalus P. Communication skills for medical students: results from three experiential methods. Simul Gaming
27. Tayem Y, Altabtabaei A, Mohamed M, et al. Competence of medical students in communicating drug therapy: value of role-play demonstrations. Indian J Pharmacol
28. Cushing AM, Jones A. Evaluation of a breaking bad news course for medical students. Med Educ
29. Roman B, Borges N, Morrison AK. Teaching motivational interviewing skills to third-year psychiatry clerkship students. Acad Psychiatry
30. Coonar AS, Dooley M, Daniels M, Taylor RW. The use of role-play in teaching medical students obstetrics and gynaecology. Med Teach
31. Fertleman C, Gibbs J, Eisen S. Video improved role play for teaching communication skills. Med Educ
32. King J, Hill K, Gleason A. All the world's a stage: evaluating psychiatry role-play based learning for medical students. Australas Psychiatry
33. Lavanya SH, Kalpana L, Veena RM, Bharath Kumar VD. Role-play as an educational tool in medication communication skills: students' perspectives. Indian J Pharmacol
34. Luttenberger K, Graessel E, Simon C, Donath C. From board to bedside – training the communication competences of medical students with role plays. BMC Med Educ 2014;14:135. doi: 10.1186/1472-6920-14-135.
35. Mills JK, Dalleywater WJ, Tischler V. An assessment of student satisfaction with peer teaching of clinical communication skills. BMC Med Educ 2014;14:217. doi: 10.1186/1472-6920-14-217.
36. Mumtaz S, Zahra T. Role-play as a learning modality in Pakistan. Clin Teach
37. Saab BR, Usta J, Major S, Musharrafieh U, Ashkar K. Communication skills in a Lebanese medical school: from movie theaters to medical classrooms. Fam Med
38. Shield RR, Tong I, Tomas M, Besdine RW. Teaching communication and compassionate care skills: an innovative curriculum for pre-clerkship medical students. Med Teach
39. Rees C, Sheard C, Davies S. The development of a scale to measure medical students' attitudes towards communication skills learning: the communication skills attitude scale (CSAS). Med Educ
40. White EJ, Lewis JH, McCoy L. Gaming science innovations to integrate health systems science into medical education and practice. Adv Med Educ Pract
41. Okuda Y, Bryson EO, DeMaria S, et al. The utility of simulation in medical education: what is the evidence? Mt Sinai J Med N Y
42. Setyonugroho W, Kennedy KM, Kropmans TJB. Reliability and validity of OSCE checklists used to assess the communication skills of undergraduate medical students: a systematic review. Patient Educ Couns