Academic Medicine, October 2000, Volume 75, Issue 10
PAPERS: Truth and Consequences

Validity of Faculty Ratings of Students' Clinical Competence in Core Clerkships in Relation to Scores on Licensing Examinations and Supervisors' Ratings in Residency

CALLAHAN, CLARA A.; ERDMANN, JAMES B.; HOJAT, MOHAMMADREZA; VELOSKI, J. JON; RATTNER, SUSAN; NASCA, THOMAS J.; GONNELLA, JOSEPH S.

Section Editor(s): Camp, Gwedie PhD


Author Information

Correspondence: Clara Callahan, MD, Admissions Office, Jefferson Medical College, Philadelphia, PA 19107-5833; e-mail: clara.callahan@mail.tju.edu.

Connections between assessment measures in medical school, residency, and practice need to be studied to ascertain the validity of such assessments across the continuum of medical education and physician training.1,2 Assuring the validity of students' clinical competence ratings is especially important because these assessments are among the major components of the dean's letter of evaluation and, as such, are used in ranking candidates for residency programs.

Medical schools expend considerable time and effort in preparing a dean's letter for each of their graduating students. The letter is based largely on the faculty's assessments of the student's academic and clinical performance, and it should be one of the most important attachments to the student's application for graduate medical education. Despite this, residency directors may not attach much importance to the dean's letter,3 perhaps in part because they are uncertain that the information it contains validly predicts performance during residency.

Previous surveys have indicated that residency directors rated academic criteria such as scores on the United States Medical Licensing Examination (USMLE), membership in Alpha Omega Alpha (AOA), the medical honor society, and class rank4,5 highly as selection variables. More recently, performance during clinical clerkships has been cited as an important factor,3,6 particularly in the specialty for which the student is applying and especially for the most competitive residencies.7 It is thus increasingly important to confirm the validity of clerkship evaluations to assure the credibility of the dean's letter as a predictor of postgraduate performance.

The dean's letters of evaluation from Jefferson Medical College include a broad range of information (USMLE Step 1 score, second- and third-year class ranks, histogram of third-year written examination grades, clinical ratings, and excerpts from the narrative evaluations from the third-year clerkships). We have previously documented the validity of a calculated medical school class rank in predicting postgraduate performance.8,9

The purpose of this study was to examine the validity of faculty ratings of students' clinical competences in six core clinical clerkships in relation to the students' subsequent performances on medical licensing examinations and to program directors' ratings of clinical performance in the first year of residency.


Method

Study participants were 2,158 students at Jefferson Medical College who graduated between 1989 and 1998. Faculty ratings of students' clinical competences in core clerkships in the third year of medical school, scores on licensing examinations, and residency program directors' ratings of clinical competence were retrieved from the database of the Jefferson Longitudinal Study of Medical Education.10

The predictors (independent variables) included faculty ratings of students' clinical competences in six core clerkships (family medicine, internal medicine, obstetrics-gynecology, pediatrics, psychiatry, and surgery). These global ratings are part of a detailed assessment form that is completed by the clerkship coordinators at each site. The global ratings of clinical competence in each clerkship were assigned on a five-point scale currently designated as 5 = “high honors,” 4 = “excellent,” 3 = “good,” 2 = “marginal,” and 1 = “incomplete” or “failure.”

The criterion measures (dependent variables) included scores on USMLE Steps 2 and 3 and postgraduate clinical competence ratings for graduates who had given written permission for follow up (about 75% of the graduating seniors). These ratings were assigned by directors of the residency programs near the end of the first year, using a 33-item rating form. This form measures three areas of clinical competence: “data gathering and processing skills” (16 items), “interpersonal skills and attitudes” (ten items), and “socioeconomic aspects of patient care” (seven items). Each item was rated on a four-point Likert scale, and ratings were averaged within the three competence areas. Data have been reported in support of the measurement properties of this rating form, including construct validity (factor structure), the internal consistency aspect of reliability, and the criterion-related validity of the form.11,12
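
As a minimal sketch of how such subscale scores could be computed, the following Python fragment averages the Likert items within each of the three competence areas; it is illustrative only, and the item column names (dg1-dg16, ip1-ip10, se1-se7) are hypothetical placeholders, not the actual labels on the rating form.

import pandas as pd

def score_rating_form(df: pd.DataFrame) -> pd.DataFrame:
    # Average the 4-point Likert items within each of the three
    # competence areas of the 33-item postgraduate rating form.
    # Column names are hypothetical placeholders.
    scores = pd.DataFrame(index=df.index)
    scores["data_gathering"] = df[[f"dg{i}" for i in range(1, 17)]].mean(axis=1)  # 16 items
    scores["interpersonal"] = df[[f"ip{i}" for i in range(1, 11)]].mean(axis=1)   # 10 items
    scores["socioeconomic"] = df[[f"se{i}" for i in range(1, 8)]].mean(axis=1)    # 7 items
    return scores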

Scores on the USMLE Step 1 were also used to adjust the outcomes for performance differences on this examination. Bivariate correlations and multiple regression analyses were used to examine the associations between ratings in medical school clerkships and the criteria.
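
A sketch of these analyses follows, assuming one row per graduate; the data frame and variable names are hypothetical, not those of the Jefferson Longitudinal Study database. With all variables z-scored, the regression coefficients are the standardized beta weights reported below, and the model R-squared is the shared variance.

import pandas as pd
import statsmodels.formula.api as smf

CLERKSHIPS = ["fam_med", "int_med", "obgyn", "peds", "psych", "surgery"]

def analyze(df: pd.DataFrame, criterion: str):
    # Bivariate Pearson correlations between each clerkship rating and
    # the criterion measure (e.g., a USMLE Step 2 score).
    correlations = df[CLERKSHIPS].corrwith(df[criterion])

    # z-score predictors and criterion so the fitted coefficients are
    # standardized beta weights.
    z = df[CLERKSHIPS + [criterion]].apply(lambda c: (c - c.mean()) / c.std())

    # Multiple regression of the criterion on all six clerkship ratings;
    # model.rsquared is the shared variance (e.g., .14 for Step 2).
    model = smf.ols(f"{criterion} ~ " + " + ".join(CLERKSHIPS), data=z).fit()
    return correlations, model.rsquared, model.params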


Results

The bivariate correlations reported in Table 1 are all statistically significant (p < .01). The highest correlations between clerkship ratings and USMLE scores, .29 and .20, were found between the internal medicine clerkship and Steps 2 and 3, respectively. The lowest, .17 and .11, were observed between the psychiatry clerkship and Step 2 scores and between the surgery clerkship and Step 3 scores, respectively. Larger correlations were obtained for the internal medicine, family medicine, pediatrics, and obstetrics-gynecology clerkships than for the psychiatry and surgery clerkships.

Table 1

The results of multiple regression analysis indicated that the shared variance between clerkship ratings and Step 2 scores was 14% (R2 = .14). The overlap was 7% for Step 3 scores, 12% for postgraduate ratings in data gathering and processing skills, 11% for ratings in interpersonal skills and attitudes, and 9% for ratings in the socioeconomic aspects of patient care. Each of these relationships was statistically significant (p < .01).

Inspection of the standardized regression coefficients, or beta weights, reported in Table 1 indicates that in a multivariate statistical model, competence ratings given in the family medicine, internal medicine, and pediatrics clerkships contributed significantly and consistently to the prediction of all five criterion measures (p < .01). The magnitudes of the standardized regression coefficients indicate that, among these clerkships, ratings in the internal medicine clerkship made the largest unique contribution to predicting three of the five criterion measures.

Ratings in the psychiatry clerkship contributed to the prediction of Step 2 and Step 3 scores in the multivariate model (p < .05) but did not predict ratings of postgraduate clinical competence. Ratings in the surgery clerkship made a unique contribution to the prediction of Step 2 scores and of postgraduate ratings for data-gathering and processing skills and for interpersonal skills and attitudes.

Additional analyses examined the total number of high-honors ratings earned by each student across the six clerkships. We classified the numbers of high-honors ratings, which ranged from 0 to 6, into the following three categories: 0 (48% of the sample), 1–3 (48% of the sample), and 4–6 (4% of the sample).

We examined the willingness of the residency program directors to offer further residency training to each resident at the end of the first postgraduate year in relation to the number of high-honors ratings. Further residency training, which is usually offered only to those who solidly meet the first-year training standards, was offered to all but 66 (5%) of the 1,401 graduates for whom data were available. The proportion of graduates who would not be offered further training was highest (6%) among those with no high-honors rating in any clerkship, followed by those with one to three high-honors ratings (3%). All of the graduates with four to six high-honors ratings were offered further training. The association between the number of high-honors ratings and the offer of further residency training was statistically significant (χ²(2) = 9.4, p < .01).
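
Because exact cell counts were not published, this test can only be reconstructed approximately from the reported percentages; the counts in the sketch below are illustrative estimates, and they yield a chi-square statistic close to the reported value.

import numpy as np
from scipy.stats import chi2_contingency

# Rows: 0, 1-3, and 4-6 high-honors ratings (about 48%, 48%, and 4%
# of the 1,401 graduates). Columns: [not offered further training,
# offered further training]. Cell counts are reconstructed from the
# reported percentages and are therefore approximate.
table = np.array([
    [40, 633],  # about 6% of the 0-honors group not offered
    [20, 653],  # about 3% of the 1-3 group not offered
    [0,   56],  # all of the 4-6 group offered
])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2({dof}) = {chi2:.1f}, p = {p:.3f}")  # about chi2(2) = 9.9 here, near the reported 9.4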

We conducted additional analyses by adding Step 1 scores to the multiple regression models predicting the five criterion measures reported in Table 1, to adjust statistically for differences in Step 1 performance. After adjustment, the competence ratings in internal medicine, family medicine, and pediatrics still significantly predicted Step 2 scores, and the competence ratings in family medicine and pediatrics significantly predicted Step 3 scores. Statistically controlling for Step 1 scores did not change the pattern of findings in the multivariate regression analyses in which the core-clerkship competence ratings were the predictors and the three postgraduate clinical competence areas of "data-gathering and processing skills," "interpersonal skills and attitudes," and "socioeconomic aspects of patient care" were the criterion measures.
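
A sketch of the adjusted model, under the same hypothetical variable names as before: Step 1 enters the regression as an additional predictor, so each clerkship's beta weight then reflects its unique contribution after differences in Step 1 performance are controlled.

import pandas as pd
import statsmodels.formula.api as smf

CLERKSHIPS = ["fam_med", "int_med", "obgyn", "peds", "psych", "surgery"]

def analyze_adjusted(df: pd.DataFrame, criterion: str):
    # z-score predictors and criterion so coefficients are standardized.
    cols = CLERKSHIPS + ["step1", criterion]
    z = df[cols].apply(lambda c: (c - c.mean()) / c.std())

    # Entering Step 1 as a covariate adjusts the clerkship betas for
    # differences in Step 1 performance.
    model = smf.ols(f"{criterion} ~ step1 + " + " + ".join(CLERKSHIPS),
                    data=z).fit()
    return model.params, model.pvalues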


Discussion

The present study examined the validity of the clinical competence evaluations assigned by medical school faculty, which are often reported in dean's letters of evaluation. Our findings suggest that faculty ratings are valid and useful in predicting performance on medical licensing examinations and clinical competence ratings in residency. Although the faculty ratings assigned in the internal medicine, family medicine, and pediatrics clerkships yielded stronger associations with the criterion measures than did those in the psychiatry and surgery clerkships, the number of high-honors ratings a student earned across all six clerkships had a significant association with whether further training was offered to the graduate at the end of the first year of residency.

It should be noted that although the correlation coefficients were all statistically significant, they were not large; all fell in the range of small to moderate effect sizes described by Cohen.13 However modest in magnitude, the results were consistent, and this consistency provides credible evidence in support of the validity of the ratings.


Conclusions and Implications

Medical schools want to help each of their graduates to obtain the best residency position commensurate with his or her qualifications. However, most faculty realize that it is shortsighted to prepare a dean's letter that misrepresents a student's medical school record or excludes relevant observations of the student's performance. Obfuscation is counterproductive.14 We found that the clerkship ratings for internal medicine, family medicine, pediatrics, and obstetrics-gynecology were significantly correlated with criterion measures. These evaluations were significant predictors of performance in postgraduate training. Likewise, our findings indicate that the high-honors ratings of competence in core clerkships were significantly associated with residency program directors' decisions to offer further residency training.

The largest correlations were obtained for ratings in the internal medicine clerkship. This may be because our students spend 12 weeks in that clerkship but only six weeks in each of the others. The longer internal medicine clerkship allows more observations and broader evaluations by a larger number of faculty and residents, which could contribute to the increased overlap between its ratings and the criterion measures.

The Association of American Medical Colleges recommended in 1989 that the dean's letter be described as a letter of evaluation rather than as a letter of recommendation,15 and many schools have followed this recommendation. Studies in a variety of settings have confirmed that superior performance in medical school does predict performance beyond medical school.1,10,16 Our results should not only increase the confidence of medical school faculty in their evaluations but also reassure residency selection committees about the validity of the evaluations in dean's letters as predictors of clinical competence beyond medical school. Every medical school should be committed to providing empirical support for the validity of the information in its dean's letters of evaluation.


References

1. Gonnella JS, Hojat M, Erdmann JB, Veloski JJ (eds). Assessment Measures in Medical School, Residency, and Practice. New York: Springer, 1993.

2. Campos-Outcalt D, Witzke DB, Fulginiti JV. Correlations of family medicine clerkship evaluations with scores on standard measures of academic achievement. Fam Med. 1994;26:85–8.

3. Wagoner NE, Suriano JA. Recommendations for changing the residency selection process based on a survey of program directors. Acad Med. 1992;67:459–65.

4. Wagoner NE, Grey G. Report of a survey of program directors regarding selection factors in graduate medical education. J Med Educ. 1979;54:445–52.

5. Wagoner NE, Suriano JR, Stoner JA. Factors used by program directors to select residents. J Med Educ. 1986;61:10–21.

6. Villanueva AM, Kaye D, Abdalhak SS, Morahan PS. Comparing selection criteria of residency directors and physicians' employers. Acad Med. 1995;70:261–71.

7. Wagoner NE, Suriano JR. Program directors' responses to a survey on variables used to select residents in a time of change. Acad Med. 1999;74:51–8.

8. Blacklow RS, Goepp CE, Hojat M. Class ranking models for dean's letters and their psychometric evaluation. Acad Med. 1991;66(9 suppl):S10–S12.

9. Blacklow RS, Goepp CE, Hojat M. Further psychometric evaluations of a class-ranking model as a predictor of graduates' clinical competence in the first year of residency. Acad Med. 1993;68:295–7.

10. Hojat M, Gonnella JS, Veloski JJ, Erdmann JB. Jefferson Medical College's longitudinal study: a prototype of assessment of changes. Education for Health. 1996;9:99–113.

11. Hojat M, Veloski JJ, Borenstein BD. Components of clinical competence ratings: an empirical approach. Educ Psychol Meas. 1986;46:761–9.

12. Hojat M, Borenstein BD, Veloski JJ. Cognitive and noncognitive factors in predicting the clinical performance of medical school graduates. J Med Educ. 1988;63:323–5.

13. Cohen J. Statistical Power Analysis for the Behavioral Sciences. Hillsdale, NJ: Erlbaum, 1987.

14. Edmund M, Roberson M, Hasan N. The dishonest dean's letter: an analysis of 532 dean's letters from 99 US medical schools. Acad Med. 1999;74:1033–5.

15. Association of American Medical Colleges. Report of the Ad Hoc Committee on Dean's Letters. In: A Guide to the Preparation of Medical School Dean's Letter. Washington, DC: AAMC, 1989.

16. Hojat M, Gonnella JS, Erdmann JB, Veloski JJ. The fate of medical students with different levels of knowledge: are the basic medical sciences relevant to physician competence? Adv Health Sci Educ. 1997;1:197–207.

Section Description

Research in Medical Education: Proceedings of the Thirty-ninth Annual Conference. October 30 - November 1, 2000. Chair: Beth Dawson. Editor: M. Brownell Anderson. Foreword by Beth Dawson, PhD.

© 2000 Association of American Medical Colleges
