Have Clinical Teaching Effectiveness Ratings Changed with the Medical College of Wisconsin's Entry into the Health Care Marketplace?


Section Editor(s): Woolliscroft, James O. MD

PAPERS: Plenary: Outstanding Research Papers

Correspondence: Dawn Bragg, PhD, Assistant Director, Office of Educational Services, Medical College of Wisconsin, 8701 Watertown Plank Road, Milwaukee, WI 53226.

Medical schools, as competitors in today's health care marketplace, face the challenge of training future physicians while relying increasingly on clinical revenues.1 Is teaching compatible with competitive managed care in the future of health care?2

Skeff, Bowen, and Irby argue that teaching takes time and that its value must be re-emphasized as a core mission of medical schools.3 Medical education researchers have reported diminishing amounts of time available for physicians' educational responsibilities to both residents4 and medical students.5 Student evaluations reveal that less time has been available for students in more recent years.6 Thus, the impact of time constraints on education has been documented, but the critical issue to be investigated is whether the quality of teaching has been compromised.

As a large, private medical school, the Medical College of Wisconsin (MCW) has not escaped the grasp of today's competitive health care environment. On December 31, 1995, the John L. Doyne Hospital (JLDH), formerly Milwaukee County General Hospital, was closed. While this facility (a primary practice and clinical teaching site) was purchased by a private adult not-for-profit hospital, its sale nonetheless serves as a major demarcation point in MCW's transition into today's health care marketplace. Indigent care was thereafter provided on a competitive contract basis. Our faculty formed a clinical practice group to enhance their competitive position in this evolving health care environment. Declining federal support for graduate medical education led to decreased positions in selected specialties and a corresponding decrease in their support of medical student education. While the multi-dimensional impact of these changes on medical education, at MCW and elsewhere, will take years to analyze,7 preliminary analysis can reveal whether the quality of clinical teaching has changed during this time period. This study, therefore, examined whether there have been changes in clinical teaching effectiveness ratings as clinicians at MCW compete for patients and revenue.

Method

The study utilized student ratings of clinical teachers from a longitudinal clinical teaching database implemented in 1992. A standard clinical teaching instrument8 is used across participating clinical departments. The instrument contains 16 characteristics of effective clinical teaching, derived from a comprehensive review of the literature, each rated on a five-point Likert scale (1 = most positive). Items address faculty interaction with students (e.g., actively involved me with patients; provided timely, constructive feedback without belittling me), ability to communicate (e.g., clear, organized; answered my questions clearly), and overall teaching effectiveness. The form is highly reliable, with a coefficient alpha of .96.
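The reliability statistic reported above, coefficient (Cronbach's) alpha, can be computed directly from an evaluations-by-items score matrix. The sketch below is illustrative only: the `cronbach_alpha` function and the simulated rating data are our own, not the study's instrument or data; the simulation simply builds in a shared latent "teaching quality" component so the items cohere, as the instrument's .96 alpha implies.

```python
import numpy as np

def cronbach_alpha(scores):
    """Coefficient alpha for an (n_respondents, n_items) score matrix."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_variances = scores.var(axis=0, ddof=1).sum()   # sum of item variances
    total_variance = scores.sum(axis=1).var(ddof=1)     # variance of total scores
    return (k / (k - 1)) * (1.0 - item_variances / total_variance)

# Simulated ratings: one latent trait per respondent plus per-item noise,
# yielding the high inter-item consistency a reliable instrument shows.
rng = np.random.default_rng(0)
latent = rng.normal(2.0, 0.8, size=(500, 1))             # latent trait
ratings = latent + rng.normal(0.0, 0.4, size=(500, 8))   # 8 noisy items
alpha = cronbach_alpha(ratings)
```

With item noise small relative to the shared component, the resulting alpha lands well above .90, in the range the paper reports for its scales.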

Since 1992, third-year medical students have evaluated 295 full-time clinical teachers in pediatrics, internal medicine, family medicine, anesthesiology, and general surgery. For the purposes of this study, the data were divided into three time periods, using 1995 as the benchmark date for MCW's entry into the health care marketplace: before-entry, 1993–94; at-entry, 1995–96; and after-entry, 1997–98 (numbers of evaluations per period = 1,327, 4,354, and 6,577, respectively).

A three-stage analytic process was used to determine whether students' ratings of clinical teaching had changed during the study period. First, the 16 clinical teaching instrument items were clustered to facilitate analysis using agglomerative hierarchical cluster analysis (HCA).9 This method has been successfully used to cluster items on standardized tests into psychological dimensions.10 In HCA for an n-item test, there are n solutions. In the first step, each item comprises its own cluster. At each subsequent step, the procedure combines the two clusters from the previous step with the smallest proximity value, i.e., the pair believed to be most similar. At the final, nth step, all of the items are placed into a single cluster. By examining the two- or three-cluster solution for interpretability, a researcher can get a nonparametric perspective on groups of items that may be dimensionally distinct. Unlike factor analysis, cluster analysis makes no distributional assumptions and offers a quick way to identify possible dimensions. In this study, selected clusters of clinical teaching skills were examined for internal consistency using coefficient alpha.
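The agglomerative procedure described above can be sketched with standard tools. This is a toy illustration, not the study's analysis: we invent an inter-item correlation matrix for six items (the study used 16) with three obvious blocks, convert correlation to a distance so that smaller values mean more similar, run the merge sequence, and then cut the tree at the three-cluster solution.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

# Illustrative inter-item correlations: items 0-1, 2-3, and 4-5
# form three tight blocks (within-block r = .8, between-block r = .2).
corr = np.full((6, 6), 0.2)
for a, b in [(0, 1), (2, 3), (4, 5)]:
    corr[a, b] = corr[b, a] = 0.8
np.fill_diagonal(corr, 1.0)

# Proximity: smaller distance = more similar, as in agglomerative HCA.
dist = 1.0 - corr
condensed = squareform(dist, checks=False)      # condensed distance vector
Z = linkage(condensed, method="average")        # n-1 agglomeration steps
labels = fcluster(Z, t=3, criterion="maxclust") # cut at the 3-cluster solution
```

Inspecting `labels` recovers the three built-in blocks, mirroring how the study examined the two- and three-cluster solutions for interpretability.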

Using these clusters, a two-way analysis of variance was performed on the cluster means to determine whether (1) students' ratings varied by time period; (2) students' ratings varied by item cluster; and (3) there was an interaction between time period and cluster. Individual items that had been closely associated with the availability of teaching time in previous studies were then analyzed with one-way analyses of variance to examine differences in student ratings across the three time periods.
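The item-level step, a one-way ANOVA across the three time periods, can be sketched as below. The per-period rating samples are simulated for illustration (1 = most positive), not the study's data; the full two-way design would additionally cross time period with item cluster.

```python
import numpy as np
from scipy.stats import f_oneway

# Hypothetical ratings of one item in each time period (illustrative only).
rng = np.random.default_rng(42)
before_entry = rng.normal(1.7, 0.6, size=300)
at_entry = rng.normal(2.1, 0.6, size=300)
after_entry = rng.normal(1.9, 0.6, size=300)

# One-way ANOVA: do mean ratings differ across the three periods?
f_stat, p_value = f_oneway(before_entry, at_entry, after_entry)
```

With group means this far apart relative to their spread, the test rejects the hypothesis of equal period means, the pattern the study reports for its time-related items.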

Results

A three-cluster solution resulting from the HCA was selected for statistical and substantive reasons and to increase comparability with findings from prior factor-analytic studies. Ullian et al.,11 in their synthesis of factor-analytic studies, reported that while the number of factors varies, most studies suggest four-factor solutions. The three-cluster solution was selected for this study because the two-cluster solution contained many items that did not fit together qualitatively, and solutions with more clusters contained at least one group with fewer than four items, posing a threat to internal consistency. The three clusters were examined qualitatively to assess content validity and their relationship to Ullian's four factors.

The first cluster of clinical teaching skills was labeled supervisor/person and contained seven items: supportive of me/had rapport with me, approachable/available, actively involved me with patients, communicated expectations, demonstrated skills/procedures to be learned, provided opportunities to practice diagnostic/assessment skills, and provided feedback without belittling me. The second cluster was labeled physician/teacher and contained five items: answered questions clearly, asked questions clearly, explained basis for decisions/actions, clear/organized, and clinically competent/knowledgeable. The third group, containing four items, was labeled instructor/leader: took advantage of teaching opportunities, enthusiastic/stimulating, responded to student-initiated learning issues, and emphasized comprehension rather than factual recall. All three item clusters, supervisor/person, physician/teacher, and instructor/leader, were found to be highly reliable (coefficient alpha = .90, .86, and .80, respectively). According to Ullian et al., these three clusters define the roles that clinical teachers assume in their interactions with students.

The students' ratings ranged from 1 (most positive) to 5 (least positive). Mean ratings across the three time periods differed significantly (p < .001) (see Table 1). Post-hoc comparisons (Tukey test) revealed that the mean ratings for all pairs of periods were significantly different (all comparisons p < .001). Mean student ratings for the three clusters also differed significantly (p < .001). Throughout the before-entry, at-entry, and after-entry periods, physician/teacher skills were rated best by third-year students, while supervisor/person skills received the worst ratings (see Table 1). The analysis also showed a significant interaction between time periods and clusters (p < .001).
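The Tukey post-hoc step above compares every pair of periods while controlling the family-wise error rate. A minimal sketch, again on simulated period samples rather than the study's data:

```python
import numpy as np
from scipy.stats import tukey_hsd

# Hypothetical per-evaluation mean cluster ratings in each period (illustrative).
rng = np.random.default_rng(7)
before_entry = rng.normal(1.7, 0.6, size=300)
at_entry = rng.normal(2.2, 0.6, size=300)
after_entry = rng.normal(2.0, 0.6, size=300)

# Tukey's HSD: all pairwise comparisons with a family-wise error correction.
res = tukey_hsd(before_entry, at_entry, after_entry)
# res.pvalue[i, j] holds the adjusted p-value for the (i, j) pairwise comparison.
```

A small adjusted p-value for a pair (e.g., before-entry vs. at-entry) indicates those two period means differ, which is the pattern the study found for all pairs.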



Mean student ratings for the three sets of skills started out positively in the first, before-entry year (see Figure 1). This was likely because in 1993 faculty began to receive the first results of their clinical teaching evaluations; as reported in a prior study, when faculty receive clinical teaching evaluation results, their ratings improve as they immediately seek to address deficits.12 Mean ratings for supervisor/person and instructor/leader skills increased (became worse) sharply in the second year, while mean ratings for physician/teacher skills continued to improve throughout the before-entry years. During the at-entry period, mean ratings for supervisor/person and instructor/leader skills continued to worsen, while ratings for physician/teacher skills increased only gradually. Supervisor/person ratings peaked in 1996, the year the faculty practice plan was implemented, and ratings for instructor/leader and physician/teacher skills leveled off between 1995 and 1996. The after-entry period saw improved ratings for all three item clusters; however, none of the cluster ratings returned to the before-entry baseline level.

Figure 1


Of particular importance were the significant differences across time periods among the mean ratings of characteristics associated with the availability of time. Mean ratings of items within the supervisor/person cluster (e.g., supportive of me, approachable/available, actively involved me with patients) followed the cluster-level increases, and the item "provided timely, constructive feedback without belittling me" received increasingly poor ratings across the three time periods. Analysis indicated that all four of these time-related items differed significantly across the time periods (p < .005).

Discussion

Longitudinal analysis of a clinical teaching evaluation data set reveals that the overall effectiveness of our clinical teaching decreased from a before-entry high at the time of entry into the health care marketplace. Over the after-entry period, evaluations gradually improved but did not return to the before-entry baseline level. Not all item ratings were equally affected: physician/teacher skills (e.g., clear/organized, clinically competent) showed the least change, while supervisor/person skills (e.g., approachable/available, supportive of me, actively involved me with patients, provided timely, constructive feedback without belittling me) showed the largest decline. The supervisor/person skills, containing the interpersonal items, appear to have been the most profoundly affected by the entry into the health care marketplace.

Although students may become more discriminating in their assessments of teaching and teachers over time, this study does not report ratings by the same students over time; it used ratings by six successive third-year classes. In addition, student ratings were averaged over two years for each time period, minimizing the influence of individual class differences.

HCFA guidelines, increased pressures for clinical productivity, and accountability for cost-effective patient care have led physicians to repeatedly report that they have less time for clinical teaching. The results of this study suggest that there has also been a change in the quality of clinical teaching, as measured by the clinical teaching effectiveness ratings over this critical time period, a relationship requiring further study to determine causality. While it is promising that the rating results do appear to have improved following an initial decline during the at-entry period, the fact that these ratings did not return to baseline levels is distressing.

Supervisor/person skills are critical components of the teaching/learning process, as education is enhanced when there is a supportive relationship between the learner and the teacher.13 Medical schools must prepare clinical educators with teaching skills that are effective and efficient in today's time-pressured clinical environments and implement real reward structures that recognize the value of time spent in clinical teaching if we are to maintain the quality of our clinical education.

References

1. Bland CJ, Holloway RL. A crisis of mission: faculty roles and rewards in an era of health care reform. Change. 1995;30–5.
2. Farrell TA. Teaching and managed care: are they compatible in the 21st century? Arch Ophthalmol. 1997;115:251–2.
3. Skeff KM, Bowen JL, Irby DM. Protecting time for teaching in the ambulatory care setting. Acad Med. 1997;72:694–7.
4. Bolognia JL, Wintroub BU. The impact of managed care on graduate medical education and academic medical centers. Arch Dermatol. 1996;132:1078–84.
5. Nordgren R, Hantman JA. The effect of managed care on undergraduate medical education. JAMA. 1996;275:1053–8.
6. Xu G, Wolfson P, Robeson M, Rodgers JF, Veloski JJ, Brigham TP. Students' satisfaction and perceptions of attending physicians' and residents' teaching role. Am J Surg. 1998;176:46–8.
7. Xu G, Hojat M, Veloski JJ, Gonnella JS. The changing health care system: a research agenda for medical education. Eval Health Prof. 1999;22:152–68.
8. Schum TR, Yindra KJ, Koss R, Nelson DB. Students' and residents' ratings of teaching effectiveness in a department of pediatrics. Teach Learn Med. 1993;5:128–32.
9. Kaufman L, Rousseeuw PJ. Finding groups in data: an introduction to cluster analysis. New York: Wiley, 1990.
10. Stout W, Habing B, Douglas J, Kim HR, Zhang J. Conditional covariance-based multidimensionality assessment. Appl Psychol Meas. 1996;20:331–54.
11. Ullian JA, Bland CJ, Simpson DE. An alternative approach to defining the role of the clinical teacher. Acad Med. 1994;69:832–8.
12. Schum TR, Yindra KJ. Relationship between systematic feedback to faculty and ratings of clinical teaching. Acad Med. 1996;71:1100–2.
13. Palmer P. The Courage to Teach. San Francisco, CA: Jossey-Bass, 1997.

Section Description

Research in Medical Education: Proceedings of the Thirty-ninth Annual Conference. October 30 - November 1, 2000. Chair: Beth Dawson. Editor: M. Brownell Anderson. Foreword by Beth Dawson, PhD.

© 2000 by the Association of American Medical Colleges