Factorial validation of a widely disseminated educational framework for evaluating clinical teachers.

Litzelman, D K; Stratos, G A; Marriott, D J; Skeff, K M
Academic Medicine: June 1998

PURPOSE: To examine an instrument for evaluating clinical teaching using factor analysis and to refine the validated instrument to a practical length.

METHOD: Factor analysis was performed on a split sample of 1,581 student evaluations rating 178 teachers. The instrument was based on the seven-category Stanford Faculty Development Program's (SFDP's) clinical teaching framework and contained 58 Likert-scaled items, with at least seven items per category plus five items measuring "teacher's knowledge." Standard survey item-reduction methodology was used: items with low or complex factor loadings were removed, and items with low item-scale correlations were removed iteratively. The results were replicated on the second sample.

RESULTS: The seven original categories emerged, and the items originally categorized under "knowledge" statistically combined with "promoting self-directed learning." More than 73% of the variance was explained. Item reduction yielded a 25-item instrument with overall internal consistency above .97 and construct-level internal consistencies ranging from .82 to .95.

CONCLUSIONS: Factor analysis of student ratings validated the seven-category SFDP framework. An abbreviated instrument measuring the seven categories is described. The results suggest that students may not systematically distinguish between their teachers' knowledge and their teachers' ability to promote self-directed learning, a finding important for both administrators and faculty development programs.
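The internal-consistency figures reported above (overall above .97, constructs .82 to .95) are conventionally computed as Cronbach's alpha. The abstract does not specify the study's computation, so the following is an illustrative sketch only, with invented rating data that are not from the study:

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha for a multi-item scale.

    item_scores: one list per item, each holding one rating per respondent.
    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    k = len(item_scores)                 # number of items
    n = len(item_scores[0])              # number of respondents

    def variance(xs):
        # Population variance, as used in the classical alpha formula.
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # Each respondent's total score across all items.
    totals = [sum(item[r] for item in item_scores) for r in range(n)]
    item_var_sum = sum(variance(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var_sum / variance(totals))

# Two perfectly parallel items (hypothetical data) give alpha = 1.0.
print(cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4]]))  # → 1.0
```

In practice such a function would be applied per construct (the seven SFDP categories) as well as to the full 25-item instrument.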

(C) 1998 Association of American Medical Colleges