Saxena, Varun MD; O’Sullivan, Patricia S. EdD; Teherani, Arianne PhD; Irby, David M. PhD; Hauer, Karen E. MD
Medical schools strive to ensure that students achieve clinical skills competence. Unfortunately, faculty members rarely observe students’ clinical skills during clerkships or employ evaluation methods that specifically assess students’ achievement of clinical skills.1–3 In addition, evaluating clinicians lack confidence in their ability to identify incompetent students or feel reluctant to fail poorly performing students.4,5 To help assess clinical competence, most U.S. medical schools now administer a comprehensive clinical skills assessment using standardized patients (SPs).6 Multiple studies have confirmed the validity of comprehensive assessment scores in assessing students’ clinical competence and have even shown that these scores correlate with subsequent performance during internship and further training.7–11
Students failing any part of the comprehensive assessment should receive remediation to improve their failing performance. Failing students manifest problems in either cognitive skills (history-taking, physical examination, clinical knowledge, and clinical reasoning), noncognitive skills (professionalism and communication), or both.12 Although small, single-institution studies have reported strategies designed to remediate deficits in the cognitive and noncognitive domains,13–17 approaches to remediation vary widely.13–18 To our knowledge, no studies identify optimal remediation strategies, nor are there guidelines regarding how to remediate a particular skill deficit. Educators are thus left to select remediation strategies on an ad hoc basis. However, certain skill areas may be particularly challenging to remediate. A growing body of evidence suggests that remediating noncognitive problems is more challenging and less effective than remediation of cognitive problems.19–21
This study explores the strategies educators use to remediate medical students after a comprehensive clinical skills assessment. We sought to describe how educators verify comprehensive assessment scores, who identifies failing students’ needed area (or areas) of remediation, which remediation activities they employ, and the relationship between strategies selected and deficits identified. We also examined educators’ confidence in different remediation strategies. Finally, we assessed which programs chose to retest their students after remediation.
This was a cross-sectional national survey study.
Participants and setting
In a previous survey of medical school curriculum deans,6 we identified 71 medical schools that conduct a comprehensive clinical skills assessment. We defined a comprehensive assessment as a multistation, cross-disciplinary exam outside of a single clerkship, involving SPs. We asked the deans to name the persons responsible for standard setting and remediation for their school’s comprehensive assessment, and the 71 individuals named constituted the current study population. The University of California, San Francisco, institutional review board approved the study.
All five investigators developed the survey items based on our prior work12,22 and pilot tested the survey with two local medical educators with experience in SP exams. Survey questions addressed whether a school’s remediation process included elements of remediation that we previously identified, including verifying scores, diagnosing a failing student’s deficits, conducting remediation activities, and retesting.22 We asked participants to respond for each of six previously identified skill domains: history-taking, physical examination, clinical knowledge, clinical reasoning, professionalism, and communication.12 All questions in our survey regarding score verification, diagnosis, remediation activities, and retesting had multiple-choice options that we formulated from the common themes reported in our previous work.12,22 This development resulted in a 28-item survey with four sections on remediation practices after the comprehensive assessment.
We asked participants to identify their medical school name, their title and role in their school’s comprehensive assessment and number of years in that role, and their gender. We characterized the participating institutions according to public versus private status (UnivSource), geographic region as defined by the Association of American Medical Colleges (AAMC), and number of enrolled students (as reported by the AAMC).
The 28-item survey contained four sections:
1. Verification and diagnosis. We asked participants whether and how their institution verifies a failing student’s scores, how learner deficits are diagnosed, and who assigns remediation activities.
2. Use of remediation activities. We provided participants with a list of eight remediation activities based on our prior work22 and asked how often (1 = never, 2 = less than half the time, 3 = about half the time, 4 = more than half the time, and 5 = almost always or always) they used each activity overall and for each of the six skill domains (history-taking, physical examination, clinical knowledge, clinical reasoning, professionalism, and communication). We computed a remediation effort score by summing the ratings for the eight activities, both for overall use and for each of the six skill domains, yielding seven scores that each ranged from 8 to 40. As shown in Table 1, we grouped similar remediation activities. Group 1, clinical activities, included preceptorships, remediation within clerkships, and specially scheduled clinical rotations. Group 2, independent study, included Web-based modules, readings, and independent review of exam recordings. Group 3, precepted video review, included precepted review of exam recordings. Group 4, organized group activities, included practice with SPs, skills workshops, seminars, and group discussions. We created average scores for each activity group by averaging the data on the use of individual activities within the group.
3. Confidence and satisfaction. Participants indicated their confidence in nine aspects of their school’s remediation process on a five-point Likert scale (1 = strongly disagree to 5 = strongly agree) and their satisfaction with their remediation process with a single item on the same five-point scale.
4. Remediation outcomes. We asked participants whether they retested failing students and, if so, to answer five questions regarding their retest or, if not, to indicate why not. Participants indicated whether or not they used any of four options (retest pass rate, report by remediation faculty, USMLE Step 2 Clinical Skills pass rate, or other method) to measure the success of their school’s remediation process.
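The effort scoring described in section 2 above is straightforward to make concrete. The following is a minimal sketch with hypothetical ratings, not study data; the function names are ours and do not come from the survey instrument:

```python
def effort_score(ratings):
    """Sum a respondent's eight activity-frequency ratings (each 1-5).

    Because each of the eight activities is rated from 1 ("never") to
    5 ("almost always or always"), the possible range is 8 to 40,
    matching the effort-score range described above.
    """
    if len(ratings) != 8 or any(not 1 <= r <= 5 for r in ratings):
        raise ValueError("expected eight ratings on the 1-5 scale")
    return sum(ratings)


def group_mean(group_ratings):
    """Average the usage ratings of the individual activities within one
    activity group (e.g., the clinical activities in Group 1), as done
    for the four grouped activity scores in Table 1."""
    return sum(group_ratings) / len(group_ratings)
```

For example, a respondent who answered “never” for every activity would receive the minimum effort score of 8, and one who answered “almost always or always” throughout would receive the maximum of 40.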
In the fall of 2007, we distributed the survey through Zoomerang, an online survey software system. We invited participants via an e-mail that explained that we were conducting a study funded by the Josiah Macy, Jr. Foundation to evaluate clinical skills remediation at medical schools nationally and that we were asking the individual to participate because a curriculum dean or equivalent leader in their medical school had identified that person as the institution’s key contact person on this topic. We sent nonresponders up to five follow-up e-mails. We did not provide any incentives to participants for completing the survey.
We computed descriptive statistics for institutional and respondent characteristics and for survey items on remediation. We used chi-square analysis and the Fisher exact test to assess demographic differences between respondents’ and nonrespondents’ institutions as well as between respondents’ institutions and all U.S. medical schools accredited by the Liaison Committee on Medical Education (LCME). We compared remediation usage scores overall and across all six skill domains for each strategy using a repeated-measures analysis of variance, and we repeated this for the four activity groups. We set the P value at .05. Post hoc tests following significant F tests used a family-wise type I error rate of .05. We calculated bivariate correlations between remedial activity usage and confidence and satisfaction items. Finally, we conducted a series of univariate tests examining variables potentially related to whether or not institutions retested, and we used the nine variables significant at the .1 level, including demographic, confidence, and remediation effort variables, in a logistic regression analysis. We analyzed data using SPSS version 15.0 (SPSS Inc., Chicago, Illinois).
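The bivariate correlations referred to above (between remedial activity usage and the confidence and satisfaction items) are product-moment coefficients. As a minimal sketch, assuming Pearson’s r (the SPSS default for bivariate correlation) and using toy data rather than study data:

```python
from math import sqrt


def pearson_r(x, y):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(x)
    if n != len(y) or n < 2:
        raise ValueError("need two equal-length samples with n >= 2")
    mean_x, mean_y = sum(x) / n, sum(y) / n
    # The (n - 1) factor in the covariance and the standard deviations
    # cancels, so it is omitted from both numerator and denominator.
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)
```

A perfectly linear positive relationship yields r = 1 and a perfectly inverse one yields r = −1; coefficients of the magnitude reported in the Results (roughly 0.27–0.37) indicate modest positive associations.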
Fifty-three of 71 (74.6%) participants responded. Twenty-two (41.5%) respondents were associate or vice deans, and 24 (45.3%) were directors of clinical skills education. Of the survey respondents, 37 (69.8%) supervised their institution’s comprehensive assessment, and 14 (26.4%) specifically supervised remediation. On average, respondents had been in their comprehensive assessment role for 5.7 years (standard deviation [SD] 4.1). There were no significant differences between respondent schools and all U.S. medical schools accredited by the LCME in terms of public versus private status, geographic region, or number of enrolled students.
1. Verification and diagnosis.
Nearly all (49/53, or 93%) respondent schools verified a failing student’s comprehensive assessment scores before remediation, most commonly by reviewing exam score reports (47/49, or 96%), watching all or a portion of the failing student’s exam video (28/49, or 57%), or meeting with the failing student (24/49, or 49%). The persons responsible for diagnosing failing students’ needed area (or areas) of remediation and creating remediation plans were a comprehensive assessment director (17/53, or 32%), dean (13/53, or 25%), or member of an exam or academic oversight committee (10/53, or 19%).
2. Use of remediation activities.
Table 1 shows the frequency of use of the eight remediation activities overall as well as for remediating the six skill domain problems. Respondents used precepted video review most frequently for remediation overall (mean 3.40, SD 1.41) and in all six skill domains, generally “about half the time,” and they used this remediation activity significantly more often than the other three activity groups for remediating all skill domains except physical examination. Respondents used clinical activities least frequently overall and in all six domains, at a level generally around “less than half the time.” Except for the skill domains of clinical knowledge and professionalism, respondents used clinical activities significantly less than any other activity group. For clinical knowledge, they used clinical activities and organized group activities with comparable frequency, both around “less than half the time.” For professionalism, respondents likewise used clinical activities and independent study with comparable frequency, around “less than half the time.”
3. Confidence and satisfaction.
Confidence and satisfaction data are summarized in Table 2. Respondents were most confident in their comprehensive assessment scores and their ability to diagnose a learner’s deficits. Confidence in remediating each of the six skill domains was below the “agree” level. Respondents were least confident in remediating professionalism problems (mean 2.96, SD 1.06). Confidence in the effectiveness of the school’s remediation process was significantly positively correlated with overall effort toward remediation (correlation coefficient = 0.37, P < .01) and with remediation effort scores in each skill domain (history-taking effort score, 0.32; physical examination effort score, 0.34; clinical knowledge effort score, 0.27; clinical reasoning effort score, 0.35; professionalism effort score, 0.28; communication effort score, 0.31). Confidence in the effectiveness of the remediation process also correlated significantly (P < .05) with overall use of clinical activities (0.32) and precepted video review (0.28).
Figure 1 shows the correlation between confidence in remediation of the six skill domains and use of the four remediation activity groups. Confidence in remediating history-taking and clinical knowledge did not correlate significantly with use of any of the activity groups. Respondents who used organized group activities more frequently were significantly more confident in remediating physical examination (correlation coefficient = 0.30, P = .01), clinical reasoning (0.27, P = .03), and communication problems (0.34, P < .01). Use of precepted video review correlated significantly and negatively with confidence in remediating professionalism problems (−0.50, P < .001).
4. Retesting after remediation.
Thirty-nine (74%) respondent institutions retested failing students after the comprehensive assessment. Nine of these 39 (23%) required only a subset of all failing students to retest, based on a decision by the comprehensive assessment leadership (6/9, or 67%), remediating faculty (4/9, or 44%), and/or dean (1/9, or 11%). On average, comprehensive assessment retests included 5.2 (SD 3.2) cases, nearly always of the same difficulty as the original exam (38/39, or 97%). Institutions scored the retest using multiple methods, including checklists completed by SPs (37/39, or 95%), checklists completed by faculty (11/39, or 28%), and/or global assessment by a faculty observer (10/39, or 26%). Pass/fail determinations were criterion referenced (26/39, or 67%), normative (16/39, or 41%), and/or decided by a faculty observer (13/39, or 33%).
Of the 14 (26%) institutions that did not retest, most reported cost (9/14, or 64%), ability of remediation faculty to determine competence (7/14, or 50%), and/or other curricular priorities (6/14, or 43%) as reasons for not retesting.
Institutions used the United States Medical Licensing Examination (USMLE) Step 2 Clinical Skills (CS) pass rate (31/53, or 59%) and/or a summary report by remediation faculty (20/53, or 38%) to measure the success of their remediation, but six (6/53, or 11%) did not assess the outcomes of their remediation. Of the 39 institutions that required retests, 26 (67%) used the retest pass rate to measure the success of remediation.
In a logistic regression examining variables related to an institution’s likelihood of conducting a comprehensive assessment retest, using the USMLE Step 2 CS exam pass rate as a measure of success of the remediation process negatively predicted the likelihood of retesting (β = −3.02, standard error [SE] = 1.25, P = .02). Schools that were more satisfied with their remediation process were more likely to retest (β = 1.09, SE = 0.50, P = .03).
Our study examined how medical schools respond to failing performance on a comprehensive clinical skills assessment, with the aim of informing the development of remediation guidelines. Participants described their procedures for diagnosing and remediating critical clinical and communication skills deficits. Schools are investing significant administrative resources in analyzing why a student fails to demonstrate competence, by having senior educators review exam score reports and videos and meet with failing students. Once a failing student’s deficient skill domains have been identified, educators may select a combination of clinical activities, independent study, precepted video review, and organized group activities for remediating the student’s deficits. However, confidence and satisfaction in the remediation process are low compared with confidence in the ability to identify failing students and their deficits.
Effective remediation requires the use of multiple modalities,23 especially for clinical reasoning24,25 and communication deficits.26–28 A multimodality approach to remediation helps address the variety of failing students’ needs while also maintaining a balance between different remediation styles.24,25,29 Although our study shows that institutions use some remediation activities more than others, institutions reported using all activities to some extent regardless of the skill domain being remediated. For example, respondents used precepted video review most frequently for remediating communication deficits. However, they also used organized group activities frequently, and the use of these activities positively correlated with confidence in the institution’s ability to address communication problems. Our findings may indicate that medical educators understand the need to use multiple methods as part of an intensive remediation curriculum to remediate a particular student’s skill deficit, an approach that has been shown to be effective for communication problems.14
Alternatively, our findings do not necessarily mean that individual students receive multiple interventions for remediation; educators may apply different strategies for different individuals. Whereas the few students who fail because of simple technique errors can remediate with a single intervention, most who fail the comprehensive clinical assessment exhibit deficits in multiple skill domains12 and likely require multimodality remediation. Students undergoing remediation often harbor additional, previously undiagnosed skill deficits.30 Thus, although limited situations exist in which single-strategy remediation may be appropriate, most students likely require a diverse, multimethod approach to remediation.
A precepted video review provides failing students the opportunity to receive formative feedback and develop contextual understanding of their failure. Establishing the context, motivation, and insight necessary to target students’ efforts toward their failing skill domains is a central ingredient for successful remediation.13,16,29,31,32 Although our findings show that institutions used precepted video review only half the time, precepted video review was the most commonly employed remediation strategy for all six skill domains. A meeting for a precepted video review early in the remediation process aids students in overcoming denial regarding their failure.32 Addressing denial upfront and encouraging students to accept responsibility for their failure both increase the likelihood that a failing student will make constructive changes.33 Moreover, an early meeting can help educators decide which additional remediation activities are appropriate for a particular failing student.34
Remediation strategies that employ a format similar to the usual curricular activity may be more effective than strategies that introduce unfamiliar curricular formats to students.23 Remediation through familiar activities facilitates students’ ability to build new knowledge and skills by layering information on previously experienced contexts.35 For remediating history-taking and physical examination problems, institutions reported frequent use of organized group activities, including practice with SPs and skills workshops, seminars, or group discussions. This format is almost identical to the way that history-taking and physical examination are taught during the preclinical years.36–39 Authentic activities where students feel invited to participate and engage enhance learning.40 Congruence between the standard and remediation curricula may explain why use of organized group activities to remediate physical examination deficits correlated with confidence in remediating these problems in our study.
Independent study activities were the next most frequently employed strategies for remediating knowledge and clinical reasoning deficits, just as independent reading is a cornerstone of preclinical education. However, we found that use of organized group activities positively correlated with confidence, which may suggest that interactive formats are more effective for remediating clinical reasoning deficits than independent study. Organized group activities have already been shown to be successful in remediating clinical knowledge and reasoning deficits after core clerkships.15 Organized group activities may better address clinical reasoning deficits because they provide students with immediate feedback and the opportunity to build on experience by actively applying knowledge, both of which are important in developing clinical reasoning skills.35,41,42 Organized group activities also allow failing students to observe and interact with similarly performing peers, a process that promotes reflection on personal strengths and weaknesses.43,44
There are likely multiple explanations for the relatively low reported confidence in remediating all six skill domains. When schools dedicate more resources to remediation, as shown by using clinical activities or attaining higher remediation effort scores, they report higher confidence in their remediation process. Greater investment in the remediation process likely enhances the quality and ideally the efficacy of remediation. However, most institutions use the convenient and inexpensive precepted video review most frequently and the authentic and resource-intensive clinical activities least frequently, suggesting that confidence may be low in part because of inadequate resources for remediation. Most institutions implement their comprehensive clinical assessment late in the third year or early in the fourth year of medical school.6 Confidence in remediation may be low because educators discover failing students too late for remediation, particularly with competing demands of the residency application process for students. Only about half of schools require passing a retest for graduation, and, among those institutions that retest, criteria to define which students retest and what constitutes a passing score are not standardized across schools.45 In addition, the emphasis on assessment of competence and the requirement for students to pass the USMLE Step 2 CS exam are relatively new.46 Medical schools are only recently requiring comprehensive skills assessments6 and are therefore newly confronted with students requiring remediation who previously had gone undetected. But, as assessment continues to drive change, medical schools may effectively infuse new, more intensive remediation curricula and thus develop increased confidence in the process.
Educators may also report low confidence in remediation because no standard policy on remediation exists. Institutional decisions about remediation activities are not driven by large outcome studies. The importance of a uniform policy to govern remediation of underprepared students has been emphasized in the college setting.47 Neither a comparable policy nor a norm has been established for remediating medical students who fail the comprehensive skills assessment. However, significant challenges exist in creating a “one size fits all” remediation policy given the disparities in students’ skill deficits,12 variety of contributing factors for failure,32 and benefits of individualized remediation and mentorship.14,34
Our study shows that educators are uncertain about which techniques are effective for remediating professionalism problems, as shown by the low use of any remediation activities for unprofessional behavior despite widespread endorsement of the importance of professionalism in medical education.21 Comprehensive clinical assessments are not typically designed to screen for professionalism deficits, and unprofessional examination behavior may come as a surprise.12 Moreover, remediating students with professionalism deficits poses unique difficulties because these students may lack insight and be least accurate in self-assessing their performance.31,48 A comprehensive remediation program for professionalism that combines nonpunitive support with the possibility of sanctions may improve some students’ deficiencies while identifying some that cannot be remediated.49 Regardless of the barriers, remediating professionalism problems is critical for students’ future performance,21 and earlier identification with effective remediation strategies is needed.
This study has several limitations. Respondents came from only 53 U.S. medical schools; however, we achieved a high response rate from individuals experienced in remediation. We assessed confidence rather than actual outcomes of remediation; there is currently too much variability in remediation practices to compare outcomes meaningfully across institutions. There may be other explanations for the lack of high confidence in existing remediation processes that we did not ask about. Finally, we did not correlate use of remediation activities with subsequent student performance.
At a time when medical schools are beginning to feel comfortable instituting comprehensive clinical assessments to ensure clinical competence and are discovering the challenges of addressing failing scores, this study provides an in-depth characterization of the status of remediation strategies across the country. On the basis of this study and the literature, we believe a number of recommendations regarding remediation after the comprehensive clinical assessment have emerged. An effective remediation program requires significant resource investment and should offer multiple strategies in remediating a failing student. A precepted video review can serve as an important initial meeting with a failing student. Remediation strategies that employ formats familiar to students may be particularly effective. Organized group activities have numerous pedagogic advantages over independent study and may be particularly useful for remediating history-taking, physical examination, clinical knowledge, and clinical reasoning deficits. Further research and collaboration are needed to establish the optimal protocol for remediating students who fail the comprehensive clinical assessment. More studies exploring the role of clinical activities in remediation, analyzing the cost-benefit of using multiple remediation strategies for a single failing student, exploring student perceptions about the usefulness of different remediation strategies, and investigating new techniques for early identification and remediation of student professionalism deficits should be undertaken. Finally, undergraduate medical education programs should collaborate to implement best practices regarding remediation.
The authors thank the Josiah Macy, Jr. Foundation for funding; Kathleen M. Kerr for assistance with survey development and distribution; Josephine Tan for assistance with literature searching; and the participating schools.
1 Kassebaum DG, Eaglen RH. Shortcomings in the evaluation of students’ clinical skills and behaviors in medical school. Acad Med. 1999;74:842–849.
2 Howley LD, Wilson WG. Direct observation of students during clerkship rotations: A multiyear descriptive study. Acad Med. 2004;79:276–280.
3 York NL, Niehaus AH, Markwell SJ, Folse JR. Evaluation of students’ physical examination skills during their surgery clerkship. Am J Surg. 1999;177:240–243.
4 Speer AJ, Solomon DJ, Fincher RM. Grade inflation in internal medicine clerkships: Results of a national survey. Teach Learn Med. 2000;12:112–116.
5 Dudek NL, Marks MB, Regehr G. Failure to fail: The perspectives of clinical supervisors. Acad Med. 2005;80(10 suppl):S84–S87.
6 Hauer KE, Hodgson CS, Kerr KM, Teherani A, Irby DM. A national study of medical student clinical skills assessment. Acad Med. 2005;80(10 suppl):S25–S29.
7 Boulet JR, McKinley DW, Norcini JJ, Whelan GP. Assessing the comparability of standardized patient and physician evaluations of clinical skills. Adv Health Sci Educ Theory Pract. 2002;7:85–97.
8 Stillman PL, Regan MB, Swanson DB, et al. An assessment of the clinical skills of fourth-year students at four New England medical schools. Acad Med. 1990;65:320–326.
9 Tamblyn R, Abrahamowicz M, Dauphinee WD, et al. Association between licensure examination scores and practice in primary care. JAMA. 2002;288:3019–3026.
10 Taylor ML, Blue AV, Mainous AG 3rd, Geesey ME, Basco WT Jr. The relationship between the National Board of Medical Examiners’ prototype of the Step 2 clinical skills exam and interns’ performance. Acad Med. 2005;80:496–501.
11 Hasnain M, Connell KJ, Downing SM, Olthoff A, Yudkowsky R. Toward meaningful evaluation of clinical competence: The role of direct observation in clerkship ratings. Acad Med. 2004;79(10 suppl):S21–S24.
12 Hauer KE, Teherani A, Kerr KM, O’Sullivan PS, Irby DM. Student performance problems in medical school clinical skills assessments. Acad Med. 2007;82(10 suppl):S69–S72.
13 Faustinella F, Orlando PR, Colletti LA, Juneja HS, Perkowski LC. Remediation strategies and students’ clinical performance. Med Teach. 2004;26:664–665.
14 Lin CT, Barley GE, Cifuentes M. Personalized remedial intensive training of one medical student in communication and interview skills. Teach Learn Med. 2001;13:232–239.
15 Magarian GJ, Campbell SM. A tutorial for students demonstrating adequate skills but inadequate knowledge after completing a medicine clerkship at the Oregon Health Sciences University. Acad Med. 1992;67:277–278.
16 Beckert L, Wilkinson TJ, Sainsbury R. A needs-based study and examination skills course improves students’ performance. Med Educ. 2003;37:424–428.
17 Bennett AJ, Roman B, Arnold LM, Kay J, Goldenhar LM. Professionalism deficits among medical students: Models of identification and intervention. Acad Psychiatry. 2005;29:426–432.
18 Rosenblatt MA, Schartel SA. Evaluation, feedback, and remediation in anesthesiology residency training: A survey of 124 United States programs. J Clin Anesth. 1999;11:519–527.
19 Hunt DD, Carline J, Tonesk X, Yergan J, Siever M, Loebel JP. Types of problem students encountered by clinical teachers on clerkships. Med Educ. 1989;23:14–18.
20 Murden RA, Way DP, Hudson A, Westman JA. Professionalism deficiencies in a first-quarter doctor–patient relationship course predict poor clinical performance in medical school. Acad Med. 2004;79(10 suppl):S46–S48.
21 Papadakis MA, Loeser H, Healy K. Early detection and evaluation of professionalism deficiencies in medical students: One school’s approach. Acad Med. 2001;76:1100–1106.
22 Hauer KE, Teherani A, Irby DM, Kerr KM, O’Sullivan PS. Approaches to medical student remediation after a comprehensive clinical skills examination. Med Educ. 2008;42:104–112.
23 Boylan H, Bonham BS, White SR. Developmental and remedial education in postsecondary education. New Dir High Educ. 1999;27:87–101.
24 Ark TK, Brooks LR, Eva KW. The benefits of flexibility: The pedagogical value of instructions to adopt multifaceted diagnostic reasoning strategies. Med Educ. 2007;41:281–287.
25 Eva KW, Hatala RM, Leblanc VR, Brooks LR. Teaching from the clinical reasoning literature: Combined reasoning strategies help novice diagnosticians overcome misleading information. Med Educ. 2007;41:1152–1158.
26 Zick A, Granieri M, Makoul G. First-year medical students’ assessment of their own communication skills: A video-based, open-ended approach. Patient Educ Couns. 2007;68:161–166.
27 Yedidia MJ, Gillespie CC, Kachur E, et al. Effect of communications training on medical student performance. JAMA. 2003;290:1157–1165.
28 Dowell J, Dent JA, Duffy R. What to do about medical students with unsatisfactory consultation skills? Med Teach. 2006;28:443–446.
29 Levin H, Calcagno J. Remediation in the Community College: An Evaluator’s Perspective. New York, NY: Community College Research Center, Teachers College; 2007.
30 Teherani A, O’Sullivan PS, Lovett M, Hauer KE. Categorization of unprofessional behaviors identified during administration of and remediation after a comprehensive clinical performance examination using a validated professionalism framework. Med Teach. In press.
31 Srinivasan M, Hauer KE, Der-Martirosian C, Wilkes M, Gesundheit N. Does feedback matter? Practice-based learning for medical students after a multi-institutional clinical performance examination. Med Educ. 2007;41:857–865.
32 Sayer M, Chaput De Saintonge M, Evans D, Wood D. Support for students with academic difficulties. Med Educ. 2002;36:643–650.
33 Wu AW, Folkman S, McPhee SJ, Lo B. Do house officers learn from their mistakes? JAMA. 1991;265:2089–2094.
34 Harthun NL, Schirmer BD, Sanfey H. Remediation of low ABSITE scores. Curr Surg. 2005;62:539–542.
35 Bowen JL. Educational strategies to promote clinical diagnostic reasoning. N Engl J Med. 2006;355:2217–2225.
36 Terry R, Hiester E, James GD. The use of standardized patients to evaluate family medicine resident decision making. Fam Med. 2007;39:261–265.
37 Fletcher KE, Stern DT, White C, Gruppen LD, Oh MS, Cimmino VM. The physical examination of patients with abdominal pain: The long-term effect of adding standardized patients and small-group feedback to a lecture presentation. Teach Learn Med. 2004;16:171–174.
38 Barley GE, Fisher J, Dwinnell B, White K. Teaching foundational physical examination skills: Study results comparing lay teaching associates and physician instructors. Acad Med. 2006;81(10 suppl):S95–S97.
39 Davidson R, Duerson M, Rathe R, Pauly R, Watson RT. Using standardized patients as teachers: A concurrent controlled trial. Acad Med. 2001;76:840–843.
40 Dornan T, Boshuizen H, King N, Scherpbier A. Experience-based learning: A model linking the processes and outcomes of medical students’ workplace learning. Med Educ. 2007;41:84–91.
41 Norman G. Building on experience—The development of clinical reasoning. N Engl J Med. 2006;355:2251–2252.
42 Norman G. Research in clinical reasoning: Past history and current trends. Med Educ. 2005;39:418–427.
43 Eva KW, Cunnington JP, Reiter HI, Keane DR, Norman GR. How can I know what I don’t know? Poor self assessment in a well-defined domain. Adv Health Sci Educ Theory Pract. 2004;9:211–224.
44 Holen A. The PBL group: Self-reflections and feedback for improved learning and growth. Med Teach. 2000;22:485–488.
45 Hauer KE, Teherani A, Kerr KM, Irby DM, O’Sullivan PS. Consequences within medical schools for students with poor performance on a medical school standardized patient comprehensive assessment. Acad Med. 2009;84:663–668.
46 Accreditation Council for Graduate Medical Education. ACGME Outcome Project. Chicago, Ill: Accreditation Council for Graduate Medical Education; 2005.
47 Abraham A. College Remedial Studies: Institutional Practices in the SREB States. Atlanta, Ga: SREB Information Office; 1992.
48 Langendyk V. Not knowing that they do not know: Self-assessment accuracy of third-year medical students. Med Educ. 2006;40:173–179.
49 Parker M, Luke H, Zhang J, Wilkinson D, Peterson R, Ozolins I. The “pyramid of professionalism”: Seven years of experience with an integrated program of teaching, developing, and assessing professionalism among medical students. Acad Med. 2008;83:733–741.