Academic Medicine: April 2011 - Volume 86 - Issue 4
doi: 10.1097/ACM.0b013e31820de435
Clinical Performance

Can Students' Scores on Preclerkship Clinical Performance Examinations Predict That They Will Fail a Senior Clinical Performance Examination?

Klamen, Debra L. MD, MHPE; Borgia, Peter T. PhD

Author Information

Dr. Klamen is associate dean of education and curriculum, Southern Illinois University School of Medicine, Springfield, Illinois.

Dr. Borgia is professor, Department of Medical Microbiology, Immunology and Cell Biology, Southern Illinois University School of Medicine, Springfield, Illinois.

Correspondence should be addressed to Dr. Klamen, 801 N. Rutledge Avenue, P.O. Box 19622, Springfield, IL 62794; telephone: (217) 545-7932; fax: (217) 545-0192; e-mail: dklamen@siumed.edu.

First published online February 21, 2011

Abstract

Purpose: This study was designed to determine whether preclerkship performance examinations could accurately identify medical students at risk for failing a senior clinical performance examination (CPE).

Method: This study used a retrospective case–control, multiyear design, with contingency table analyses, to examine the performance of 412 students in the classes of 2005 to 2010 at a midwestern medical school. During their second year, these students took four CPEs that each used three standardized patient (SP) cases, for a total of 12 cases. The authors correlated each student's average year 2 case score with the student's average case score on a senior (year 4) CPE. Contingency table analysis was carried out using performance on the year 2 CPEs and passing/failing the senior CPE. Similar analyses using each student's United States Medical Licensing Examination (USMLE) Step 1 scores were also performed. Sensitivity, specificity, odds ratio, and relative risk were calculated for two year 2 performance standards.

Results: Students' low performances relative to their class on the year 2 CPEs were a strong predictor that they would fail the senior CPE. Their USMLE Step 1 scores also correlated with their performance on the senior CPE, although the predictive values for these scores were considerably weaker.

Conclusions: Under the conditions of this study, preclerkship (year 2) CPEs strongly predicted medical students at risk for failing a senior CPE. This finding opens the opportunity for remediation of deficits prior to or during clerkships.

Multiple measures exist along the continuum of undergraduate medical education to reassure educators that students are progressing toward the level of competence needed to begin the practice of medicine. To this end, perhaps no single examination better measures students' ability to interact with patients in a noncued, realistic setting than the objective structured clinical examination (OSCE) known as the senior clinical performance examination (CPE).1 Recognizing the importance of such an examination, many U.S. medical schools now include one in their curricula.2 Since 2004, the United States Medical Licensing Examination (USMLE) has included a 10-station clinical skills examination (USMLE Step 2 Clinical Skills [CS]) that all medical students must take as part of a three-step testing process prior to licensure to practice medicine.3 Like the USMLE Step 2 CS, the senior CPE measures communication skills, clinical skills, knowledge, and professionalism, requiring students to integrate these skills much as they will be expected to do as residents and practicing physicians.

Failures of senior CPEs are not infrequent,4 and remediation is difficult for a number of reasons. In the United States, these examinations typically occur during the fourth (and final) year of medical school, when students have only a short time remaining in the curriculum and are busy with elective rotations, off-campus rotations, residency program interviews, and USMLE examinations (both the Step 2 Clinical Knowledge and the Step 2 CS). Students do not react well to receiving the news that they may need to remediate a fourth-year examination. Reordering important elective rotations to allow needed remediation time is difficult logistically. Further, most remedial programs are not designed to meet the failing student's specific deficiency, so faculty frequently must assign the student a clinical rotation with a single preceptor, who often feels as if he or she must give the student a passing grade.5

Finding a way to reliably predict which students may fail their senior CPEs therefore seems to be a worthwhile pursuit. If predictors could identify these students in the preclerkship years, for example, there may be ample time available for remediation. Focused practice on the deficits elucidated by these predictors could occur over the entire third year of clinical clerkships. Accurate predictors of failure have, however, remained elusive. Papadakis et al6 found that students whose behavior was unprofessional during medical school had higher rates of disciplinary action by medical boards once they were in practice. They also found that poor grades in the first two years of medical school predicted higher rates of disciplinary action, but this association was not as strong. Chang et al7 found two predictors of senior CPE failure: low clerkship ratings and student progress reviews for communication or professionalism concerns. They found no clinical skills predictors, either in reviews of student progress or in preclerkship or clerkship performance ratings.

The purpose of this study was to determine whether students' performance on the preclerkship CPEs administered throughout the second year at a public midwestern medical school could predict whether they would fail the senior CPE. If, indeed, performance on the preclerkship CPEs reliably predicted failure on the senior CPE, ample time for performance remediation could be realized.

Method

Study design and participants

We conducted a retrospective case–control study using data from students in the Southern Illinois University School of Medicine classes of 2005 to 2010. Those who failed the senior CPE (n = 35) made up the study group. Students from the same classes who passed the senior CPE (n = 377) made up the control group. Twenty-five students in these classes had delayed entry into the senior year because of required remediation of particular curricular courses, leaves of absence, or participation in various enrichment activities; these students were omitted from the study. This research was approved as an exempt study through the Southern Illinois University School of Medicine's institutional review board.

Measures and procedures
Year 2 CPEs.

Year 2 CPEs are administered to second-year students at the end of each of four educational units throughout the year (one exam each in October, December, March, and May). These CPEs are OSCEs that measure students' clinical skills of history-taking and physical examination, communication skills, patient–doctor relationship-building skills, and clinical reasoning. Each of these examinations includes three standardized patient (SP) stations (cases), for a total of 12 SP cases during the year. An example of a case that has been used in a year 2 CPE is a 68-year-old man who presents in clinic complaining of shortness of breath, fatigue, and easy bruising for the past two months. His diagnosis is leukemia.

For each case, students interview and examine the SP and document their findings, diagnostic reasoning, and differential diagnosis; the diagnostic tests and procedures they would like to order; their final diagnosis; and the plan of care they would recommend. They enter this information into a computer program for grading. Faculty observers rate students on their history-taking and physical examination skills using a checklist specific to the case. SPs fill out a patient satisfaction checklist. Each case is scored on a 100-point basis (not including patient satisfaction scores, which are rated separately).

We used six years of year 2 CPE data (classes of 2005–2010, N = 412 students) in this study. For each year of the study, we computed each student's average percentage score for the 12 cases and determined each student's rank within the class on the basis of this score. We also determined each student's rank within the class using scores on the USMLE Step 1, which students took at the end of year 2.
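A minimal sketch of this two-step computation (per-student case average, then within-class percentile rank) follows; the data layout and column names are our illustration, not the study's actual records:

```python
import pandas as pd

# Hypothetical long-format score table: one row per student per SP case.
# Column names and values are illustrative only.
scores = pd.DataFrame({
    "class_year": [2010] * 6,
    "student_id": [1, 1, 2, 2, 3, 3],
    "case_score": [78.0, 85.0, 62.0, 70.0, 91.0, 88.0],  # 100-point case scores
})

# Each student's average percentage score across cases (12 per student in the study).
avg = scores.groupby(["class_year", "student_id"])["case_score"].mean()

# Percentile rank within each class; ranks below 0.15 (or 0.20) flag "at risk."
pct_rank = avg.groupby(level="class_year").rank(pct=True)
print(pd.DataFrame({"avg_score": avg, "pct_rank": pct_rank,
                    "below_15pct": pct_rank < 0.15}))
```

The same ranking step applies unchanged to USMLE Step 1 scores.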

Senior CPE.

The senior CPE is a 14-station (case) OSCE administered to students in July/August of their fourth year. Passing the examination is a requirement for graduation. This examination has been used at this medical school for more than 20 years and has been well described in the literature.8 The senior CPE consists of a series of 14 SP encounters, each presenting an acute or chronic disease. Students take a history, perform a focused physical examination, and discuss their findings, diagnosis, and initial recommendations with the SPs. For 10 of the cases, students document their findings, differential diagnosis, diagnostic tests and procedures, final diagnosis, and plan on a computer immediately after the encounter. For the other four cases, students write a SOAP note (subjective, objective, assessment, plan) immediately after the encounter, paralleling the procedure used in the USMLE Step 2 CS. These SOAP notes are graded by trained physician raters. SPs fill out a patient satisfaction checklist, which is the same checklist used during the year 2 CPEs. Administration of the examination over the past six years (classes of 2005–2010) yielded reliability values (Cronbach alpha) of 0.80, 0.69, 0.75, 0.82, 0.75, and 0.75, respectively.
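For reference, Cronbach alpha for a k-case examination is k/(k - 1) * (1 - sum of per-case variances / variance of total scores). A small sketch with simulated scores (not the study's data) shows the computation:

```python
import numpy as np

def cronbach_alpha(case_scores: np.ndarray) -> float:
    """Cronbach alpha for a (students x cases) matrix of case scores."""
    k = case_scores.shape[1]
    item_vars = case_scores.var(axis=0, ddof=1)       # variance of each case
    total_var = case_scores.sum(axis=1).var(ddof=1)   # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Illustrative only: simulated scores for 60 students on 14 cases that share
# a common "ability" component plus case-specific noise.
rng = np.random.default_rng(0)
ability = rng.normal(70, 8, size=(60, 1))
demo = np.clip(ability + rng.normal(0, 6, size=(60, 14)), 0, 100)
print(round(cronbach_alpha(demo), 2))
```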

Each of the cases is scored separately, and a percentage score for the case is assigned for each student. Minimal passing scores for the entire examination and for individual cases are assigned using the Hofstee method.9 For the classes of 2005 and 2006, a minimum average score for the 14 cases was used to make pass/fail decisions. For the classes of 2007 to 2010, a minimum number of cases passed was used to make pass/fail decisions on the examination as a whole: 8 cases passed for 2007 and 2008 and 10 cases passed for 2009 and 2010. Patient satisfaction is scored separately, and pass/fail decisions are also made on that portion of the exam. We did not use the patient satisfaction scores in this study.
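In brief, the Hofstee method asks judges for the minimum and maximum acceptable cut score and the minimum and maximum acceptable failure rate, then takes the cut score at which the observed cumulative failure-rate curve crosses the diagonal joining those bounds. A rough sketch follows; the judge bounds and score distribution are invented, since the study's standard-setting data are not published:

```python
import numpy as np

def hofstee_cut(scores, c_min, c_max, f_min, f_max):
    """Hofstee compromise cut score.

    Finds where the observed failure-rate curve (fraction of examinees
    scoring below a candidate cut; rises with the cut) crosses the line
    from (c_min, f_max) down to (c_max, f_min).
    """
    scores = np.asarray(scores, dtype=float)
    cuts = np.linspace(c_min, c_max, 501)
    observed = np.array([(scores < c).mean() for c in cuts])
    diagonal = f_max + (cuts - c_min) * (f_min - f_max) / (c_max - c_min)
    # The nearest point between the two curves approximates the intersection.
    return cuts[np.argmin(np.abs(observed - diagonal))]

# Invented inputs for illustration only (not the study's judge data):
rng = np.random.default_rng(1)
demo_scores = rng.normal(72, 8, size=400)
print(round(hofstee_cut(demo_scores, c_min=55, c_max=70, f_min=0.02, f_max=0.15), 1))
```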

Analysis

We used contingency table analysis with the chi-square test of independence to analyze the relationship between the probability of failing the senior CPE and performance on the year 2 CPEs or on USMLE Step 1. We used two standards for the year 2 average case score and for USMLE scores in the contingency tables: scores below the 15% rank and below the 20% rank within the class. We used Microsoft Excel 2007 (Microsoft Corp., Redmond, Washington) or SPSS version 16 (SPSS Inc., Chicago, Illinois) for all analyses.
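For concreteness, the 2 x 2 analysis can be sketched as follows (here in Python rather than Excel/SPSS). The cell counts are invented to roughly match the reported sensitivity and specificity; they are not the study's actual data:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical 2x2 table. Rows: below / at-or-above the rank standard;
# columns: failed / passed the senior CPE. Counts are illustrative only.
a, b = 24, 34     # below standard: failed, passed
c, d = 11, 343    # at or above standard: failed, passed
table = np.array([[a, b], [c, d]])

chi2, p, dof, expected = chi2_contingency(table)

odds_ratio = (a * d) / (b * c)
se_log_or = np.sqrt(1/a + 1/b + 1/c + 1/d)            # Woolf (log) method
ci_low, ci_high = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se_log_or)

sensitivity = a / (a + c)                             # flagged among failures
specificity = d / (b + d)                             # unflagged among passes
relative_risk = (a / (a + b)) / (c / (c + d))

print(f"chi2={chi2:.2f}, P={p:.4f}, OR={odds_ratio:.2f} "
      f"(95% CI {ci_low:.2f}-{ci_high:.2f}), Se={sensitivity:.2f}, "
      f"Sp={specificity:.2f}, RR={relative_risk:.2f}")
```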

Results

Table 1 summarizes our analyses using average score on the year 2 CPEs as the predictor of a failing score on the senior CPE. For the 412 students in our study, there was a moderate correlation between the average score for the 12 cases of the year 2 CPEs and the average score for the 14 cases of the senior CPE (Pearson correlation coefficient = 0.58). Contingency table analyses performed using year 2 CPE scores that fell below either the 15% or the 20% rank within the class as standards revealed that both standards have considerable value in identifying students at risk for failing the senior CPE. Below the 15% rank, the odds ratio (OR) was 20.67 (P < .001; 95% confidence interval [CI] = 9.46–45.09); below the 20% rank, the OR was 21.09 (P < .001; 95% CI = 9.24–48.03). The sensitivity and specificity for the 15% rank standard were 0.69 and 0.91, respectively, and for the 20% rank standard were 0.77 and 0.86, respectively. Although the CIs for both ORs were quite broad, the lower limit of each was sufficiently high to make year 2 performance a strong predictor of failing the senior CPE.

Table 1

Table 2 summarizes our analyses using USMLE Step 1 score as the predictor of a failing score on the senior CPE. For the 412 students in our study, there was a weak correlation between students' USMLE Step 1 scores and their average scores on the 14 cases of the senior CPE (Pearson correlation coefficient = 0.40). Contingency table analyses performed using USMLE Step 1 scores that fell below either the 15% or the 20% rank within the class as standards revealed that both standards have some value in predicting students at risk for failing the senior CPE. For students below the 15% rank, the OR was 2.99 (P = .007; 95% CI = 1.43–6.29); for those below the 20% rank, the OR was 3.53 (P = .001; 95% CI = 1.74–7.19). These ORs are considerably lower than the comparable values using the year 2 CPE scores in the contingency tables (Table 1). Likewise, the sensitivities when using USMLE Step 1 scores as a standard were lower than those when using year 2 CPE scores: 0.34 and 0.43 for the 15% and 20% rank standards, respectively. The corresponding specificities were 0.85 and 0.83.

Table 2

Discussion

Our results indicate that performance on the year 2 CPEs is a robust early predictor of deficits in clinical skills and clinical reasoning ability when using either the less-than-15% rank or the less-than-20% rank within the class as the standard. Additionally, both standards provided reasonable sensitivities, detecting 69% and 77%, respectively, of students who subsequently failed the senior CPE. These results open an opportunity for early remediation of deficits identified prior to or during clerkships. The predictive value of the year 2 CPEs, as measured by OR, is particularly desirable because it is considerably higher than that of the predictors identified by Chang et al,7 and it can generally be obtained at a much earlier point in students' training.

The use of CPEs early in medical training is especially attractive because all students are evaluated under standardized conditions that approximate actual clinical practice. Other methods of evaluating student clinical performance in the preclerkship years, such as observation during clinical skills training, are generally treated as formative, and the assessments usually are not sensitive enough to detect student deficits. Indeed, Chang et al7 found that such assessments at the medical school they studied had no predictive value for a senior CPE. Evaluations of student clinical skills and clinical reasoning during clerkships also suffer from considerable difficulties, as pointed out by Hauer et al.4 Most often, such assessments are done by direct observations that generally are not systematic or comprehensive. Rather, such observations are frequently brief or sporadic, involve different patients, and may not encompass the entire student–patient interaction. Additionally, savvy students, aware that they are part of a team, can appear to perform better than their actual capabilities because they have received information from other team members (e.g., interns, residents). Lastly, many faculty members are loath to give students poor or failing ratings, either because they believe they may not be able to reliably assess these skills or because of other barriers, such as concerns about student grievances or lawsuits, that prevent negative evaluations.5

We also found that students' low performance relative to others in the class on USMLE Step 1 yielded a statistically significant increased risk of their failing the senior CPE. We believe that this relationship demonstrates the importance of knowledge, as measured by USMLE Step 1, to clinical performance. The predictive values of USMLE scores were, however, considerably lower and therefore less useful than those provided by the year 2 CPE scores. Moreover, the sensitivity of the USMLE as a predictor is low compared with that of the year 2 CPE. Clearly, the year 2 CPE has far greater utility. Simon et al10 showed a weak correlation between second-year students' OSCE scores and USMLE Step 1 scores, though they did not study the relationship with fourth-year clinical examinations. Interestingly, Chang et al7 found no relationship between USMLE Step 1 scores and performance on senior CPEs. We cannot account for the difference between their results and ours, but it could be due to differences in the CPEs at the two schools studied or in the student populations.

The use of preclerkship CPEs is limited to medical schools that employ a curriculum that introduces students to real and/or simulated patient cases and to SPs during the first and second years of medical school. Thus, a CPE during these years might be difficult for medical schools with more traditional curricula to implement. The preclerkship curriculum at our medical school uses a hybrid problem-based learning/lecture format for the organ-system-based curriculum. Both simulated cases and SPs are used starting in the first year of medical school, and clinical skills training is integrated into the entire curriculum.

Conclusions

We believe this is the first study that identifies strong predictors in the second year of medical school for students who are at high risk for failing a senior CPE. Although our study considers data from a single school, it suggests that other medical schools that use CPEs in the second year might be able to use a similar system to identify students at high risk. Thus, early remediation processes may be implemented for such students, such as additional work with SPs under the direct supervision of faculty, extra clinical skills work across the third year's clerkships, or a structured, targeted remediation process that has proven effective elsewhere.11 Indeed, the results of this study have led us to consider implementing a monthlong mandatory clinical performance remediation program for students in the bottom 15% of the class based on their year 2 CPE data. Further study is needed to characterize the specific deficiencies of students failing CPEs, though some literature already exists on the topic.12–15

Funding/Support:

None.

Other disclosures:

None.

Ethical approval:

This research was approved as an exempt study through the Southern Illinois University School of Medicine's institutional review board.

References

1 Barrows HS, Williams RG, Moy RH. A comprehensive performance-based assessment of fourth-year students' clinical skills. J Med Educ. 1987;62:805–809.

2 Barzansky B, Etzel SI. Medical schools in the United States, 2007–2008. JAMA. 2008;300:1221–1227.

3 Dillon GF, Boulet JR, Hawkins RE, Swanson DB. Simulations in the United States Medical Licensing Examination (USMLE). Qual Saf Health Care. 2004;13(suppl 1):i41–i45.

4 Hauer KE, Hodgson CS, Kerr KM, Teherani A, Irby DM. A national study of medical student clinical skills assessment. Acad Med. 2005;80(10 suppl):S25–S29. http://journals.lww.com/academicmedicine/Fulltext/2005/10001/A_National_Study_of_Medical_Student_Clinical.10.aspx. Accessed December 6, 2010.

5 Dudek NL, Marks MB, Regehr G. Failure to fail: The perspectives of clinical supervisors. Acad Med. 2005;80(10 suppl):S84–S87. http://journals.lww.com/academicmedicine/Fulltext/2005/10001/Failure_to_Fail_The_Perspectives_of_Clinical.23.aspx. Accessed December 6, 2010.

6 Papadakis MA, Teherani A, Banach MA, et al. Disciplinary action by medical boards and prior behavior in medical school. N Engl J Med. 2005;353:2673–2682.

7 Chang A, Boscardin C, Chou C, Loeser H, Hauer K. Predicting failing performance on a standardized patient clinical performance examination: The importance of communication and professionalism skills deficits. Acad Med. 2009;84(10 suppl):S101–S104. http://journals.lww.com/academicmedicine/Fulltext/2009/10001/Predicting_Failing_Performance_on_a_Standardized.26.aspx. Accessed December 6, 2010.

8 Vu NV, Barrows HS, Marcy ML, Verhulst SJ, Colliver JA, Travis T. Six years of comprehensive, clinical, performance-based assessment using standardized patients at the Southern Illinois University School of Medicine. Acad Med. 1992;67:42–50. http://journals.lww.com/academicmedicine/Abstract/1992/01000/Six_years_of_comprehensive,_clinical,.9.aspx. Accessed December 6, 2010.

9 Norcini JJ. Setting standards on educational tests. Med Educ. 2003;37:464–469.

10 Simon SR, Volkan K, Hamann C, Duffey C, Fletcher SW. The relationship between second-year medical students' OSCE scores and USMLE Step 1 scores. Med Teach. 2002;24:535–539.

11 Klamen DL, Williams RG. The efficacy of a targeted remediation process for students who fail standardized patient examinations. Teach Learn Med. 2011;23:3–11.

12 Klamen DL, Williams RG. The Diagnosis and Treatment of the Failing Student: Standardized Patient Examinations. Springfield, Ill: SIU Press; 2010.

13 Hauer KE, Teherani A, Kerr KM, O'Sullivan PS, Irby DM. Student performance problems in medical school clinical skills assessments. Acad Med. 2007;82(10 suppl):S69–S72. http://journals.lww.com/academicmedicine/Fulltext/2007/10001/Student_Performance_Problems_in_Medical_School.19.aspx. Accessed December 6, 2010.

14 Saxena V, O'Sullivan PS, Teherani A, Irby DM, Hauer KE. Remediation techniques for student performance problems after a comprehensive clinical skills assessment. Acad Med. 2009;84:669–676. http://journals.lww.com/academicmedicine/Fulltext/2009/05000/Remediation_Techniques_for_Student_Performance.31.aspx. Accessed December 6, 2010.

15 Hauer KE, Teherani A, Irby DM, Kerr KM, O'Sullivan PS. Approaches to medical student remediation after a comprehensive clinical skills assessment. Med Educ. 2008;42:104–112.

© 2011 Association of American Medical Colleges
