Academic Medicine:
doi: 10.1097/ACM.0b013e3181b36f8b
Evaluating Clinical Skills in Standardized Settings

Predicting Failing Performance on a Standardized Patient Clinical Performance Examination: The Importance of Communication and Professionalism Skills Deficits

Chang, Anna; Boscardin, Christy; Chou, Calvin L.; Loeser, Helen; Hauer, Karen E.

Section Editor(s): Hauge, Linnea PhD; Day, Hollis MD

Author Information

Correspondence: Anna Chang, MD, 4150 Clement Street Box 181G, San Francisco, CA 94121; e-mail: (Anna.Chang@ucsf.edu).


Abstract

Background: The purpose is to determine which assessment measures identify medical students at risk of failing a clinical performance examination (CPX).

Method: Retrospective case-control, multiyear design, contingency table analysis, n = 149.

Results: We identified two predictors of CPX failure in patient–physician interaction skills: low clerkship ratings (odds ratio 1.79, P = .008) and student progress review for communication or professionalism concerns (odds ratio 2.64, P = .002). No assessments predicted CPX failure in clinical skills.

Conclusions: Performance concerns in communication and professionalism identify students at risk of failing the patient–physician interaction portion of a CPX. This correlation suggests that both faculty and standardized patients can detect noncognitive traits predictive of failing performance. Early identification of these students may allow for development of a structured supplemental curriculum with increased opportunities for practice and feedback. The lack of predictors in the clinical skills portion suggests limited faculty observation or feedback.

Professional competence for physicians and trainees encompasses communication skills, professional attributes, clinical skills, and knowledge.1 Clinical performance examinations (CPX) serve as key assessments that require students to integrate these domains in realistic clinical encounters.2 Eighty-four percent of U.S. medical schools administer a CPX during or after the core clerkship year.3 Students who fail the CPX or the United States Medical Licensing Examination (USMLE) Step 2 Clinical Skills examination often demonstrate multiple interrelated deficits in these domains. Identification of these students toward the end of medical school presents challenges because of the limited time available for remediation. Residency applications, away rotations, and research projects also may distract students from remediating core skills. Finally, students may also resent being labeled as deficient after passing courses and clerkships.

Early identification of trainees’ deficits allows for prompt intervention to improve performance. In childhood and college education, early identification and intervention strategies are widely employed for a variety of developmental and cognitive deficits. These efforts yield improvements in assessment scores, allowing learners to progress academically with their peers.4 From a constructivist perspective, such strategies provide learners the individual guidance, structured practice, and specific feedback necessary for their progress. Unfortunately, although medical schools can anticipate multiple benefits from early systematic identification and remediation of performance deficits, medical students typically receive insufficient guidance on their communication and clinical skills because of inadequate direct observation of their performance with patients and reluctance of their supervisors to give constructive feedback.5,6

The purpose of this study is to determine which assessment measures in our medical school curriculum identify students at risk of failing the patient–physician interaction or clinical skills portions of the CPX after core clerkships.


Method

Design

Retrospective case-control study.

Study sample

Students at the University of California, San Francisco School of Medicine (UCSF) who failed the CPX administered in 2005 through 2007 in either patient–physician interaction (n = 30) or clinical skills (n = 33) made up the study group. For the control group (n = 119), we randomly selected, from students in the same classes who passed the CPX, twice the number of failing students to enhance the power of the study.

UCSF requires a CPX at the start of the fourth year of medical school. The exam, developed by the California Consortium for the Assessment of Clinical Competence, includes eight encounters representing acute and chronic ambulatory presentations across disciplines. Students perform a focused history and physical examination, develop a differential diagnosis and plan, and communicate their impressions to patients. Standard setting is normative. History and physical exam scoring checklists are case-dependent; the communication skills checklist is adapted from a validated tool.7 The 2005–2007 administrations of our CPX had internal consistency (Cronbach alpha) ranging from .64 to .89. At UCSF, scores are reviewed within two weeks of the exam by the CPX exam committee, which includes the CPX director, the remediation coordinator, a dean, and a data analyst. Students with z scores less than −1.75 in patient–physician interaction or clinical skills are defined as failing and are required to remediate. One faculty remediation coordinator (A.C.) reviewed videotaped cases from failing students to verify scores and develop learning prescriptions.
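For illustration, the cohort-referenced failure cutoff amounts to a short computation. The sketch below is a minimal example only, assuming per-student component scores are available as a simple numeric array; the function name, variable names, and example scores are illustrative and are not drawn from the actual CPX data.

```python
import numpy as np

def flag_failures(scores, cutoff=-1.75):
    """Flag students whose z score on one CPX component falls below the cutoff.

    `scores` is a 1-D array of per-student scores on one component
    (e.g., patient-physician interaction). Standard setting is normative,
    so each student is compared with the cohort mean and standard deviation.
    """
    scores = np.asarray(scores, dtype=float)
    z = (scores - scores.mean()) / scores.std(ddof=1)  # cohort-referenced z scores
    return z < cutoff  # True marks a failing (remediation-required) performance

# Illustrative cohort of component scores (not study data)
cohort = [72, 80, 65, 78, 55, 83, 70, 76, 74, 68]
print(flag_failures(cohort))
```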

UCSF’s institutional review board approved the study.

Measures/procedures

We gathered data on students’ gender, Medical College Admission Test (MCAT) scores, and USMLE Step 1 scores. We collected preclerkship, clerkship, and student progress review data (described below) as predictor variables from years 1 to 3 of medical school for each student. We assigned items from each predictor data source to one of two predictor categories: communication/professionalism skills (representing the relational, affective, and moral domains of medical practice) and clinical skills (representing the cognitive and integrative domains).1 We combined communication and professionalism skills into one predictor category for several reasons. Specific communication behaviors are associated with decreased malpractice risk.8 Professionalism is context-dependent in clinical scenarios that involve communication.9 Finally, students and patients are more likely to define professional behaviors in clinical encounters in terms of communication skills.10,11

We created six predictor variable scores (three communication/professionalism skills and three clinical skills). For each variable, we combined several rating occasions to create a cumulative score for each student. We then dichotomized each cumulative score as 1 (one or more predictor occurrences) or 0 (none). Variables are described in detail below.
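As a minimal sketch of this dichotomization step, the code below counts flagged ratings across a student's rating occasions for one predictor category and collapses the count to 1 or 0. The rating structures, item names, and example values are illustrative and do not represent the actual UCSF evaluation data.

```python
def dichotomize(rating_occasions, flagged_values):
    """Return 1 if any rating occasion contains a flagged item rating, else 0.

    `rating_occasions` is a list of dicts mapping item names to ratings for one
    student within one predictor category (e.g., communication/professionalism).
    `flagged_values` is the set of ratings that count as a predictor
    (e.g., {"unsatisfactory"} preclerkship, or below-"good" clerkship ratings).
    """
    count = sum(
        1
        for occasion in rating_occasions
        for rating in occasion.values()
        if rating in flagged_values
    )
    return 1 if count >= 1 else 0

# One student's preclerkship faculty ratings (illustrative items and values)
preclerkship = [
    {"reliability": "satisfactory", "rapport": "satisfactory"},
    {"reliability": "unsatisfactory", "rapport": "satisfactory"},  # one flag suffices
    {"reliability": "satisfactory", "rapport": "exceptional"},
]
print(dichotomize(preclerkship, {"unsatisfactory"}))  # -> 1
```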

Preclerkship clinical skills course ratings.

Faculty evaluated students five times across two years in outpatient clinical preceptorships and weekly small-group sessions. We defined a predictor as a rating of “unsatisfactory” on a scale of unsatisfactory/satisfactory/exceptional. Scoring “unsatisfactory” in any of the following items generated a communication/professionalism skills preclerkship predictor score of 1: reliability, responsibility, motivation, maturity, initiative, flexibility, respect, rapport. Scoring “unsatisfactory” in any of the following items generated a clinical skills preclerkship predictor score of 1: history taking, physical examination, medical knowledge.

Clerkship ratings.

Clerkship directors compose summary evaluations of students’ performance based primarily on evaluations by faculty and residents in seven required core clerkships (family medicine, internal medicine, neurology, obstetrics–gynecology, pediatrics, psychiatry, surgery). We defined a predictor as a rating below 3 (“good”) on a 4-point scale. Scoring below 3 in any of the following items on any clerkship generated a communication/professionalism skills clerkship predictor score of 1: responsibility, self-improvement, relationships with patients, relationships with the health care team. Scoring below 3 in any of the following items generated a clinical skills clerkship predictor score of 1: history taking, physical examination, oral presentation, fund of knowledge, record keeping, problem solving.

Student progress review.

Faculty course directors and education program leaders meet regularly to review student progress and develop action plans for students with performance concerns. All failing course grades require discussion; faculty are also instructed to identify students with concerns despite passing scores and grades. Two study authors (A.C., K.E.H.) coded blinded meeting minutes from study students’ years 1–3 to remove problems that appeared unrelated to performance, leaving only true performance problems in the dataset. The authors then categorized each problem as either communication/professionalism skills or clinical skills. Both investigators coded the first 20 comments together and reached consensus. One author (A.C.) coded all remaining comments, with subsequent confirmation by the second author (K.E.H.), a clinical course director and participant in student progress review meetings. The two coders discussed discrepancies (18 of 144 comments) to reach consensus.

We defined a predictor as discussions of concerns regarding progress. Documentation of one or more communication or professionalism concerns generated a communication/professionalism skills student progress review predictor score of 1. Examples of comments coded as communication/professionalism skills predictors include “difficulty relating to patients and staff” and “communication skills need improvement.” Documentation of one or more clinical skills problems generated a clinical skills student progress review predictor score of 1. Examples of comments coded as clinical skills predictors include “difficulty synthesizing clinical information for patient care” and “concern about fund of knowledge and clinical problem-solving skills.”

Analysis

We used contingency table analysis with the chi-square test of independence to examine the probability of CPX failure as a function of each potential predictor. This statistical method is used extensively to analyze relationships between two categorical variables. For each nominal predictor, we applied the chi-square test to the 2 × 2 contingency table and computed the corresponding odds ratio. All analyses were performed with SPSS version 16.0.
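As a minimal sketch of this analysis (the original analyses were run in SPSS 16.0; the counts below are illustrative, not the study data), a 2 × 2 table of predictor status by CPX outcome yields both the chi-square test and the odds ratio:

```python
import numpy as np
from scipy.stats import chi2_contingency

# 2 x 2 contingency table: rows = predictor present / absent,
# columns = failed CPX component / passed (illustrative counts, not study data)
table = np.array([[12,  8],
                  [18, 52]])

# Chi-square test of independence (no continuity correction)
chi2, p, dof, expected = chi2_contingency(table, correction=False)

# Odds ratio from the 2 x 2 table: (a * d) / (b * c)
a, b = table[0]
c, d = table[1]
odds_ratio = (a * d) / (b * c)

print(f"chi-square = {chi2:.2f}, df = {dof}, p = {p:.3f}, odds ratio = {odds_ratio:.2f}")
```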


Results

From three academic years, our sample of 149 students included 30 students who failed the patient–physician interaction portion of the CPX and 33 who failed the clinical skills portion. Thirteen students failed both portions. There were no statistically significant differences between failing and control students on key demographic variables: gender (42% women in failing, 52% in control; P = .27), average MCAT biological sciences (11.36 [SD = 1.54], 11.31 [1.58]; P = .86), MCAT physical sciences (11.38 [1.77], 11.38 [2.00]; P = .99), MCAT verbal (10.06 [1.48], 10.60 [1.68]; P = .06), and USMLE Step 1 (223 [21.18], 229 [18.98]; P = .11).

Table 1 summarizes the results. We identified two significant predictors of failing the patient–physician interaction portion of the CPX: low clerkship ratings of communication/professionalism skills (odds ratio 1.79, P = .008) and discussion at student progress review for communication/professionalism concerns (odds ratio 2.64, P = .002). No preclerkship, clerkship, or student progress review predictors were associated with failing the clinical skills portion of the CPX.


Discussion

Students identified even once during clerkships or at student progress review meetings as having communication and professionalism skills deficiencies were at increased risk of failing the patient–physician interaction portion of the CPX. This finding suggests that supervising faculty and residents on clerkships may identify the same communication and professionalism deficits that standardized patients detect on the CPX, even though such noncognitive deficits may be difficult to quantify. This association is consistent with studies indicating that noncognitive traits and professionalism deficiencies may be predictive of medical school performance.12–14

Our finding that communication and professionalism problems predict poor CPX performance is noteworthy. Deficits in noncognitive domains are generally more persistent than clinical skills problems, which may be more amenable to simple technique lessons.15 Although faculty may detect students’ performance problems in these domains in the first three years, they may be more comfortable or skilled in providing coaching, feedback, and practice for students with clinical skills problems than for those with communication/professionalism deficits. Thus, these noncognitive problems may persist either because faculty do not remediate them or because they are inherently less amenable to remediation.

The dearth of predictors of poor performance in clinical skills has several potential explanations. Most third-year students are not observed by faculty while performing histories and physical examinations on clerkships.16 When students are observed, many faculty lack the expertise and rating tools to provide reliable assessments of performance. Even for the minority of students who are observed, Dudek et al17 describe barriers to faculty reporting of poor performance: lack of documentation, lack of knowledge of what to document, anticipation of an appeals process, and lack of remediation options. The absence of a strong correlation between clerkship grades and CPX scores suggests that these assessments measure different skills. Clerkship grades may be based on students’ oral presentations, functioning within a complex clinical environment, and team interactions, which are not typically assessed in a CPX. Our findings support the use of multiple assessment tools.

Preclerkship clinical skills course evaluations did not predict failing performance in CPX. The nature of clinical skills training in the first two years is mainly formative, and the assessment instruments may be insufficiently sensitive to detect deficits. With MCAT component scores >9 and high institutional selectivity, few students encounter adverse academic events in the first two years, decreasing the likelihood of predictors from this data source.18 Because our data suggest that some CPX performance problems can be predicted on the basis of third-year performance, even earlier identification of significant deficits could be possible with a more integrated, longitudinal, and competency-based model of medical education that spans the traditional preclinical and clerkship years.

Limitations of this study include the retrospective design using existing evaluations from a single institution. Different definitions of predictors might have changed our sensitivity to detect a relationship between the severity of a student’s performance problems and subsequent CPX performance. For example, three student progress review comments regarding personal and family illness were excluded as predictors, which may have introduced bias if those comments reflected underlying performance problems; however, including them likely would have strengthened our findings. We did not collect all evaluation measures, such as written examination performance or individual faculty evaluation comments, which might be associated with CPX performance, although very low performance on any assessment would be reflected in the clerkship evaluations and student progress review comments. Finally, we did not incorporate information from CPX videotapes or learning prescriptions to characterize the nature of failing students’ performance problems, a step that might help elucidate the nature of communication/professionalism skills deficits.


Conclusions

This study is the first to indicate that predictors can identify students at higher risk of failing the patient–physician interaction portion of a CPX. The study also highlights the importance of even a single episode of concern about communication and professionalism performance during medical school. Although our results may be school specific, the implication that predictors exist for early identification of student performance problems in these domains may generalize to medical schools that use summary clerkship ratings or a student progress review process. Because standardized patient workshops are effective in improving interviewing skills in third-year medical students, early identification of at-risk students may allow for development of a structured supplemental curriculum with increased opportunities for practice and feedback.19 Further study is needed to characterize the types of patient–physician interaction problems so that interventions can be tailored to optimize students’ performance.


Acknowledgments

The authors thank the UCSF Haile T. Debas Academy of Medical Educators and Office of Medical Education for funding, Bonnie Hellevig for data acquisition, and Patricia O’Sullivan, Arianne Teherani, Maxine Papadakis, and David Irby for study design guidance and manuscript review.


References

1 Epstein RM, Hundert EM. Defining and assessing professional competence. JAMA. 2002;287:226–235.

2 Howley LD. Performance assessment in medical education: Where we’ve been and where we’re going. Eval Health Prof. 2004;27:285–303.

3 Hauer KE, Hodgson CS, Kerr KM, Teherani A, Irby DM. A national study of medical student clinical skills assessment. Acad Med. 2005;80(10 suppl):S25–S29.

4 Fletcher JM, Foorman BR. Issues in Definition and Measurement of Learning Disabilities: The Need for Early Intervention. Baltimore, Md: Paul H. Brookes Publishing Co; 1994.

5 Howley LD, Wilson WG. Direct observation of students during clerkship rotations: A multiyear descriptive study. Acad Med. 2004;79:276–280.

6 Colletti LM. Difficulty with negative feedback: Face-to-face evaluation of junior medical student clinical performance results in grade inflation. J Surg Res. 2000;90:82–87.

7 Lang F, McCord R, Harvill L, Anderson D. Communication assessment using the common ground instrument: Psychometric properties. Fam Med. 2004;36:189–198.

8 Levinson W, Roter DL, Mullooly JP, Dull VT, Frankel RM. Physician–patient communication. The relationship with malpractice claims among primary care physicians and surgeons. JAMA. 1997;277:553–559.

9 Ginsburg S, Regehr G, Hatala R, et al. Context, conflict, and resolution: A new conceptual framework for evaluating professionalism. Acad Med. 2000;75(10 suppl):S6–S11.

10 Mazor KM, Zanetti ML, Alper EJ, et al. Assessing professionalism in the context of an objective structured clinical examination: An in-depth study of the rating process. Med Educ. 2007;41:331–340.

11 Wagner P, Hendrich J, Moseley G, Hudson V. Defining medical professionalism: A qualitative study. Med Educ. 2007;41:288–294.

12 Shen H, Comrey AL. Predicting medical students’ academic performances by their cognitive abilities and personality characteristics. Acad Med. 1997;72:781–786.

13 Ferguson E, James D, Madeley L. Factors associated with success in medical school: Systematic review of the literature. BMJ. 2002;324:952–957.

14 Murden RA, Way DP, Hudson A, Westman JA. Professionalism deficiencies in a first-quarter doctor–patient relationship course predict poor clinical performance in medical school. Acad Med. 2004;79(10 suppl):S46–S48.

15 Hauer KE, Teherani A, Kerr KM, O’Sullivan PS, Irby DM. Student performance problems in medical school clinical skills assessments. Acad Med. 2007;82(10 suppl):S69–S72.

16 Holmboe ES. Faculty and the observation of trainees’ clinical skills: Problems and opportunities. Acad Med. 2004;79:16–22.

17 Dudek NL, Marks MB, Regehr G. Failure to fail: The perspectives of clinical supervisors. Acad Med. 2005;80(10 suppl):S84–S87.

18 Huff KL, Fang D. When are students most at risk of encountering academic difficulty? A study of the 1992 matriculants to U.S. medical schools. Acad Med. 1999;74:454–460.

19 Yedidia MJ, Gillespie CC, Kachur E, et al. Effect of communications training on medical student performance. JAMA. 2003;290:1157–1165.


© 2009 Association of American Medical Colleges
