Academic Medicine: October 2006 - Volume 81 - Issue 10
doi: 10.1097/01.ACM.0000236531.32318.02
Psychometric Properties of USMLE Step 2

Impact of the United States Medical Licensing Examination Step 2 Clinical Skills Exam on Medical School Clinical Skills Assessment

Hauer, Karen E.; Teherani, Arianne; Kerr, Kathleen M.; O’Sullivan, Patricia S.; Irby, David M.

Section Editor(s): Wood, Tim PhD; Juul, Dorthea PhD

Author Information

Correspondence: Karen E. Hauer, MD, University of California, San Francisco, 533 Parnassus Ave., Box 0131, U137, San Francisco, CA 94143-0131; e-mail: khauer@medicine.ucsf.edu.

Abstract

Background: Medical schools face a new responsibility to prepare students for the United States Medical Licensing Examination Step 2 Clinical Skills (CS) exam.

Method: We conducted semistructured interviews with 25 leaders of medical school clinical skills assessments to explore purposes of in-house assessment and the impact of the Step 2 CS exam. Interviews were coded to identify major themes.

Results: Competency assessment, student and curricular feedback, and preparation for the licensing exam emerged as major purposes of in-house exams. Participants asserted that in-house exams assessed faculty-identified competencies, although not all schools had defined competencies. Limited resources made balancing formative and summative assessment goals problematic for some schools. Curricular feedback was general but valued. All schools, even those that disagreed with aspects of the licensing exam, acknowledged their roles in preparing students for the Step 2 CS.

Conclusion: An external licensing requirement engenders debate and motivates changes in clinical skills assessment and, in some cases, curricula.

The United States Medical Licensing Examination (USMLE) Step 2 Clinical Skills (CS) exam constitutes a new requirement designed to standardize clinical skills assessment and ensure minimum competence for those seeking licensure as physicians. The exam has generated debate within medical schools about their role in preparing students for clinical practice and for the licensing exam.1 High registration fees, travel expenses, and limited feedback describing performance have dampened student enthusiasm for the new requirement.2

Educators hoped that the Step 2 CS would motivate schools to enhance clinical skills curricula and testing in undergraduate medical education. The majority of schools now administer a comprehensive standardized patient assessment at the end of the core clerkships.3 However, the extent to which the national licensing requirement is promoting greater focus on content emphasized in the Step 2 CS is poorly understood. One study found that 55% of medical schools had modified their curriculum in response to the Step 2 CS exam, although the nature and extent of those modifications were not described.4

We designed this study to explore the following questions with the directors of established clinical skills assessment programs: (1) what is the purpose of in-house clinical skills assessment after a major change in licensing requirements; and (2) how is the Step 2 CS affecting medical school clinical skills curricula and assessment?


Method

In a prior survey of curriculum deans,3 we identified schools that administered comprehensive clinical skills assessments and requested the names of individuals responsible for standard setting and remediation. A “comprehensive assessment” was defined as a multistation, cross-disciplinary exam, administered outside of any single clerkship, that involved standardized patients. In Fall 2005, we randomly selected 44 of the 62 identified individuals and invited them to participate in an interview study. Potential subjects received up to two phone or e-mail invitations; one subject declined. We interviewed 25 of the 44 invited subjects, stopping when thematic saturation was reached. The University of California, San Francisco, Institutional Review Board approved the study.

We chose a qualitative methodology because closed-ended survey questions might not have captured the complexity of participant responses. Instead, we were able to elicit and consider respondents’ open and in-depth descriptions of the internal and external forces influencing their schools’ approaches to clinical skills training and assessment.

One investigator (KMK) conducted semistructured telephone interviews lasting 30–60 minutes. Participants provided verbal informed consent. The five investigators developed the interview instrument based on group expertise and results of prior work.3 The interview included open-ended questions addressing participant perceptions of and experience with comprehensive assessments, as well as perceived impact of the Step 2 CS exam. After 14 interviews, the investigators refined the interview guide by adding probes, improving flow, and eliminating redundancy. The additional probes were designed to ensure that interviewees elaborated on the rationale for their current exam design, as well as the purposes of the exam in relation to other curricular and assessment activities. Interviews were recorded and transcribed verbatim. The interviewer reviewed all transcripts for accuracy prior to analysis.

Interviews were coded to identify major themes. Three investigators (KH, AT, KMK) independently read three transcripts to generate codes, which were combined and reconciled. Four more transcripts were used to generate additional codes and refine previously identified codes. These three investigators used the refined code list to code 10 transcripts, and the remaining 15 transcripts were coded by two investigators. Discrepancies were discussed among the investigators until consensus was reached. All five investigators met to review the coded data and identify larger themes. ATLAS.ti V 5.0 software (Scientific Software Development GmbH, Berlin) was used to organize and retrieve coded data.


Results

The 25 participating schools represented all four U.S. geographic regions as defined by the Association of American Medical Colleges. Thirty-six percent of participating schools were private institutions and 64% were public, an exact match of the national distribution. Ten schools in the sample had been conducting exams for 10 or more years, whereas six schools had been conducting exams for three or fewer years.

Respondents reported that their in-house exams serve multiple purposes: assessing competence, providing feedback to both curriculum and students, and preparing students for the Step 2 CS exam. One participant explained:

We had a lot of debate about what’s the purpose of this exam, are we teaching to what was then a future clinical skills test, are we trying to ensure that our students are competent in their clinical skills before they leave us, are we trying to get information and feedback to the curriculum about student strengths and weaknesses, and knowledge base and skills. And we decided yes, all of those things.

Prioritization and emphasis of these purposes varied, as did perceptions of how the exam needed to be structured to serve each defined goal.

Competency assessment

A major purpose of the in-house assessment was to ensure that students graduated with competency as defined by local faculty. Many schools revisited the need to conduct local assessments of basic competence in the setting of the new licensing requirement:

We’ve had those conversations: do we want to continue this or not? We’ve said yes, we’re going to continue it. One for practice, and two as a school we wanted to make sure that our students were graduating with these skills.

Most schools believed their in-house exams were more difficult than the Step 2 CS exam, a difference that they felt made their in-house programs superior both as measures of competence and as preparatory exercises for the national exam. Schools noted the importance of using their internal exams to assess competencies that were highly valued by their faculty but unlikely to be addressed in the national exam:

We were interested in looking at oral presentation skills, so we built that in. We were also looking at test interpretation, so we built that in, and they don’t necessarily include that in the USMLE Step 2. We also tried to step up the level of our exam—my impression is that the USMLE Step 2 looks at common ambulatory problems. Very basic. So we try to make our patients a little bit more complex, a little bit richer for the students.

Cases that challenged student skills with cultural competence, evidence-based medicine, and communication with complicated patients were referenced as local priorities not addressed in the national exam. Participants also cited numerous limitations of clerkship evaluations in assessing clinical skills competence, including grade inflation, reliance on knowledge assessments, and insufficient observation of students. For many, these deficiencies elevated the importance of conducting a comprehensive exam.

Conflicts arose when the faculty had not defined or agreed on competencies. One participant explained:

Does this in-house exam look like the curriculum? When we don’t have competencies driving the discussion, it’s a little sketchy, because we don’t have faculty saying every student should know how to do the following.

Schools where students had failed the national exam at higher than expected rates were left to consider whether locally defined competencies aligned with the expectations of the national board. One of these schools described the challenge:

To make sure my students pass the USMLE, I have to dummy them down a little bit. They exhibit such sophisticated skills that they do things that are not capturable. And they skip over some of the more dreary, routine things that these kinds of scoring require.

Feedback to students

Feedback to students was considered a critical purpose of the in-house exam. Schools diverged in their beliefs about the optimal balance of formative and summative feedback. Although all schools provided some feedback to all students, as a rule students with the lowest scores received the most information and follow-up. Many participants noted that the in-house exam created opportunities to engage students who had not previously received or responded to feedback about their performance:

It gives me an opportunity to take a student who has an odd affect and now there’s data. Instead of me just saying, ‘Gee, this may be a problem; you don’t make good eye contact,’ I can say now, ‘This is a problem. You’re not making good eye contact. It’s been noted in your clerkships and, here on the OSCE, you did poorly.’

Many participants identified barriers to student feedback. Some schools that were heavily invested in administering a high-stakes exam offered no feedback to students during the exam and restricted student access to checklists or videotapes afterward. Others cited limited faculty time as a barrier to providing individual feedback. Another challenge was to ensure that students were focusing on concepts, as opposed to cases, when considering feedback on their performance:

Students asked to see their checklist items, thinking that would help them—it was too vague to say that they had messed up on the history or physician-patient interaction. But we didn’t give it to them purely because we felt it’s not just about this exam. So we didn’t want to train them too specifically to memorize certain skills.

Participants described a tension between channeling resources for summative assessment versus clinical skills education:

If you’re doing high-stakes exams, the students are not getting feedback because you can’t do that with a high-stakes exam. If schools are using it as an instructional tool then I think that’s great. It would be nice to be able to do that and then give feedback after each station by faculty members. It takes a lot of faculty members to sit and watch all the stations. We did that a couple of years and it was just a huge undertaking.

Participants uniformly observed that the USMLE exam promoted student buy-in to standardized patient exams and enhanced students’ motivation to receive feedback on their clinical skills. Many schools devised strategies for providing such feedback, including class meetings, class letters highlighting common errors, and, at sites where policy did not prohibit providing detailed feedback on individual performances, individual score reports or videotape reviews. One school with a longstanding exam and a high degree of faculty commitment to it combined a format similar to the national exam with faculty observers who provided individualized teaching and feedback at each station. Another school with a small class assigned a dean to each student for score review and learning plan development.

Feedback to curriculum

Most respondents believed that exam data helped with curriculum evaluation, but few described specific, effective feedback mechanisms. Many described reporting exam results to educational leaders, though the efficacy of this practice varied:

I go to the clerkship directors after each year’s results are available. I go to their meeting and they’re always so upset that the students don’t demonstrate the depth of questioning and feel that they need to teach that. They do teach that and they’ll teach it again, and next year it’s the same. So, it’s not clear we’re getting anywhere with that.

At other schools, exam performance data describing pervasive deficiencies in students’ skills prompted new training efforts earlier in the clinical skills curriculum, such as special lectures or specific physical exam teaching in clerkships.

Exam preparation

All participants acknowledged the need to use in-house assessments as a tool for preparing students for the licensing exam. Overwhelmingly, schools followed the USMLE format when designing new in-house assessments, and those with longstanding exams implemented formatting changes that increased their exams’ similarity to the Step 2 CS. Common modifications included adjusting the number and length of stations and the interstation format.

Although many schools viewed these changes positively, others were resigned to making changes for the purpose of national exam preparation. One school that had modified the in-house assessment format did so begrudgingly:

We’ve altered what we ask the students to do at the close of the transaction. I had designed individual post visit assignments. And I changed all that and I went to writing SOAP notes, because that’s what you’ve got to do there. So it has crimped our style.

Still, most schools did not feel that changing their exam format to mirror the USMLE significantly altered the primary purpose of their internal exam:

We have always considered clinical skills to be very, very important here, and it’s always been a high priority. So I don’t think USMLE changed the priority. I think in terms of the in-house exam that it’s made the students more interested in using it as a preparation assessment for the thousand-dollar test.

Two schools with longstanding clinical skills assessment programs felt that the licensing exam was redundant, though they agreed with the principle of national standards for clinical skills competency. Nonetheless, both had modified the format of their in-house assessments to increase similarity to the Step 2 CS.

Local exams and the national exam

Some schools with new in-house exams described struggles with basic administrative issues such as timing of administration, scoring methodology, and identifying mechanisms for reporting results to students and curricular leaders. These schools faced the dual challenge of securing faculty and student buy-in for their internal exams while also determining the extent to which their local efforts should be influenced by the new national requirement. In contrast, schools with established programs described firm values regarding student assessment and integration of standardized patient experiences throughout the curriculum:

I’m just happy that we were able to set up our comprehensive exam before the CS exam came down the pike, because I feel that it was the right thing to do with our curriculum. We now have standardized patient exams starting in week 8 of our curriculum. It’s how we set up our model of education.

The presence of the Step 2 CS exam as a measure of minimum competence led some schools to reassess the purpose of in-house exams. These schools did not plan to abandon standardized patient experiences, but rather to change the emphasis of their efforts:

When the Board Exam became required for licensure, we began talking about: Do we need our exam as a high stakes exam? Can we take the funding and put that into a formative program within the context of the clerkships?

All participants endorsed the national exam and in-house exams as validating the importance of clinical skills for medical students:

It elevates clinical skills more to a par with the cognitive skills that up until now have been alone in what counts as a qualified student. The students of course complain a great deal about the inconvenience, the money, the travel, the time. As do we. But I am deeply grateful that the students now see, in terms that they cannot misunderstand, that the medical establishment requires demonstration of these interpersonal skills before they’re going to qualify them as a doctor.


Discussion

The Step 2 CS exam has motivated schools with established clinical skills assessments to view their exams not only as competency assessments but also as preparatory experiences for the licensing exam. Many schools have designed new exams or modified existing exams to reflect the Step 2 CS format, changes that many students and faculty perceive as important for familiarizing students with a testing situation in a realistic, lower-stakes setting.

The Step 2 CS was created in part out of a desire to focus attention on clinical skills training.5 Our results suggest that faculty and students have responded to the emphasis on testing clinical and interpersonal skills with enhanced motivation to ensure competency. Experience with other high-stakes testing indicates that learners perform better when they are given the opportunity to participate in setting learning goals and to practice required skills,6 and the USMLE exam creates an institutional obligation to provide these experiences.

Schools in our study varied in the perception that the national exam created a burdensome obligation to “teach to the test.” Whatever their enthusiasm for the USMLE exam, schools responded consistently by mimicking its format, a change motivated by a desire to make their in-house exams more useful as preparation for licensure. However, to be truly useful for learning, a high-stakes in-house exam must be preceded by formative assessments that are based on the learning goals established by the teacher and the institution.7 Schools that articulated clear values about clinical skills training that predated the Step 2 CS exam were able to incorporate the external mandate into their training programs with relative ease. In contrast, many schools were still developing mechanisms to use assessment information to enhance the curriculum and provide individual student feedback. Whether the quantity, quality, and utility of this curricular and student feedback will increase over time is unclear.

Our study is limited in that participants all had established clinical skills assessment programs; the minority of schools without such programs may perceive different challenges in clinical skills training. Further, our participants’ perceptions may differ from those of other faculty at their institutions. However, the study is strengthened by the fact that we interviewed individuals who were intimately involved in developing and implementing comprehensive clinical exams, individuals whose views are critical to understanding issues in clinical skills assessment. We had a large sample size for a qualitative study, which yielded a range and depth of information about a dynamic topic.

This study illustrates the ways in which medical schools are incorporating a new licensing requirement into their vision of clinical skills training, and the tensions that influence defining competency, providing opportunities for formative and summative assessment, allocating resources, and preparing students for licensure.


Acknowledgments

The authors thank the Josiah Macy Jr. Foundation and the participating schools.


References

1 Papadakis MA. The Step 2 clinical-skills examination. N Engl J Med. 2004;350:1703–5.

2 A critique of the USMLE Clinical Skills Examination (http://www.medscape.com/viewarticle/503527). Accessed 27 January 2006.

3 Hauer KE, Hodgson CS, Kerr KM, Teherani A, Irby DM. A national study of medical student clinical skills assessment. Acad Med. 2005;80:S25–9.

4 Wartman SA, Littlefield JH. Changes in the US Medical Licensure Examination and impact on US medical schools. JAMA. 2005;293:424–5.

5 Medical schools and students: the impact of USMLE Step 2 CS. National Board of Medical Examiners (NBME) Examiner. 2005;52:4.

6 Gulek C. Preparing for high-stakes testing. Theory into Practice. 2003;42:42–5.

7 Guskey TR. How classroom assessments improve learning. Educ Leadership. 2003;60:6–11.

© 2006 Association of American Medical Colleges
