How Best to Evaluate Clinician–Educators and Teachers for Promotion?

Glick, Thomas H. MD

Special Theme: Faculty Development

The challenge of how best to evaluate educational scholars (and specifically, clinician-educators) and teachers for promotion continues to confront academia. While the work of educational scholars and teachers often overlaps, the terms for justifying their promotion differ substantially. In each case, the author maintains that evaluation should be oriented to evidence of the impact of their work. Educational scholars can be assessed mainly by objective impact, whereas the evidence for the impact of teachers should include profound, subjective effects on individual learners. For example, for clinician-educators engaged in scholarly work, the impact of that work can be identified in terms of changes in educational methods, career commitments, and practices (all intermediate outcomes), and even health outcomes. For teachers, in addition to customary criteria such as critical thinking, depth of knowledge, communication ability, and personal engagement, learners can be asked about the deep influence of these teachers.

The author states his case for these principles, and also presents an innovative tool, the “impact map,” as a way of graphically portraying the track record of an individual clinician-educator. Such maps are more vivid than narrative testimonials in organizing and displaying evidence of impact over time. This tool, combined with the author's other suggestions to assist the promotion process for educators and teachers, is aimed at fostering a greater emphasis on outcomes in assessing both clinician-educators and teachers, to achieve greater rigor and fairness.

Dr. Glick is chief, Division of Neurology, Cambridge Health Alliance, Cambridge, Massachusetts, and associate professor of neurology, Department of Neurology, Harvard Medical School, Boston, Massachusetts.

Correspondence and requests for reprints should be addressed to Dr. Glick, Cambridge Hospital, 1493 Cambridge Street, Cambridge, MA 02139; telephone: (617) 665-1017; fax: (617) 665-1671.

The challenge of how best to evaluate clinician-educators and teachers for promotion continues to confront academia.1 While the work of educational scholars (hereafter called “educators,” a group that includes clinician-educators) and that of teachers often overlap, the terms for justifying their promotion differ substantially. In each case, evaluation should be oriented to outcomes. However, as I will suggest, educators can be assessed mainly by their objective impacts, whereas the evidence for the impacts of teachers should include profound, subjective effects on individual learners.

Most educators teach, and many teachers contribute beyond the classroom or bedside, but, in the main, an educator advances the field, while a teacher enables individuals directly. The best teachers inspire and potentially transform the learner, as in the child whose second-grade teacher “gave me a love of reading,” or the medical student whose preceptor “led me to become more reflective and to appreciate the power of empathy.” To distinguish the types of impacts that educators and teachers can have is not to drive a wedge between them, but to focus on the kinds of evidence that will best advance their respective candidacies for promotion, especially at the higher levels, which have been most problematic.

For clinician-educators engaged in scholarly work, we can identify impact in terms of changes in educational methods, career commitments, and practices (all intermediate outcomes), and even (potentially) health outcomes. For teachers, in addition to customary criteria, such as critical thinking, depth of knowledge, communication ability, and personal engagement, we can elicit from the learners the deep influence of these teachers. While a scholarly, investigative spirit contributes greatly to good teaching, Boyer's broadening of the definition of scholarship to include teaching2 may not help us to document outcomes such as the transformational impact of the best teaching. An outcome orientation does not, however, generate hierarchies of importance or scales for weighing impact. Vexing individual judgments must rest in the hands of responsible bodies, but their eyes should be on impact, not recognition per se.



As background for my suggestions to modify evaluation for promotion, I will contrast evidence of “recognition”—frequently acknowledged as the key to higher promotion1,3—with evidence of a candidate's impact. In the absence of outcome-oriented criteria (other than publication) for educational scholarship, the name of the high-stakes promotion game is still recognition. Reputation looms large in rating the players, but does not necessarily indicate added value. Published scientific scholarship readily translates into recognition, with the collateral effects of invited, high-profile roles, such as lectureships or memberships on national committees. For promotion of educators it is tempting to adopt the achievement of such roles as surrogate evidence of productivity, at least partly because finding alternatives or complements to sheer published output is so difficult.

The criterion of recognition breeds strong incentives for self-promotion. Appearances, often garnered through skillful networking, produce name recognition, but leadership of promotional value should be accountable. In education we should measure results, not roles, just as laboratory investigators are rewarded for their productivity, not their activity. While valuation based on clinician-educator output is thought by some to be insufficiently rigorous, a reputation-based standard itself lacks rigor. Clinician-educators deserve evidence-based promotion: evidence of innovation and impact. Yet impact has been mentioned only in passing in articles on promotion of clinician-educators and teachers.3



What kind of evidence, then, will suffice for accomplishments in educational scholarship, analogous to the expectations used to evaluate laboratory investigators? An educator's innovation, such as Lawrence Weed's creation of the problem-oriented medical record, is the creative seed of that individual's future impact, which may be advanced through his or her publications, although not necessarily in “high impact” journals.4 Other creative initiatives (see below for examples) have resulted in substantial contributions, but these contributions remain largely invisible in the publication record.

To illustrate types of productive outcomes: many individuals working on evidence-based medicine contribute in one or more of Boyer's scholarly domains by discovering evidence, integrating it into guidelines, teaching its basis and use, and evaluating its applications in practice. Various clinician-educators have successfully reached beyond continuing medical education to “go public” with campaigns for changes in health and wellness behaviors and in symptom recognition, as in accident prevention, smoking cessation, and screening for depression. The innovative use of robotic simulators for clinical training on the “anesthetized patient,” for example, can be measured by improved practices. Better procedural skills, which represent intermediate outcomes, enhance patient safety, an important health outcome.

Whether the results are considered process or outcome depends, in part, on whose perspective is adopted (the learner's, the practitioner's, or the patient's), but results they are, not just activity. At each step, evidence of impact may be counted for promotion. The initiatives of clinician-educators need not attain a high national profile; they should simply be valued for their results, which are often achieved in smaller, more local, but still important ways.



In contrast to the promotion of educators, the challenge for promotion of teachers does not lie primarily in measuring changes in institutions, practices, or methods. Rather, the task is to understand how the enlightening and transformative effect of the best learner-directed teaching (mostly local) can be fairly attested. Standard teacher-evaluation tools build the foundation of assessment, but cannot quantify the depth of impact that is the gift of great teaching. Transformative teaching includes stimulating not only a love of learning, but also new abilities to “think out of the box,” and to modify paradigms (such as reorienting approaches to patient-centered care and learning, or adopting more diverse cultural reference points). Profound impacts may occur quite unexpectedly, often through authentic, unselfconscious role modeling.

If the hallmark of the very best teaching is personal inspiration or transformation, the primary source of evidence should be the testimonials of the students who grew as a result of this learning relationship. In many schools, including my own, only unsolicited letters from students or elsewhere may be entered by a candidate into the promotion portfolio, presumably to avoid the taint of favoritism or the distortions of unequal power relationships. Yet this approach has its drawbacks, for who else but the student would know of the candidate's profound personal influence? And how would the student realize that the esteemed teacher needs support if the student is not asked for it? The best regulatory safeguard against potential conflict of interest, if students are asked for letters, is full disclosure. Let the candidate teacher name names and even request letters directly, but the rules would require that the responding learner make clear the nature of the relationship and any bias or potential personal gain. Qualitative research methods make it possible to accumulate and evaluate data that are insufficient for quantitative analysis but that can demonstrate detail and depth. Testimony from multiple sources, independently gathered, can be synthesized to document an authentic impact.

More broadly, teachers' needs for testimonial support should be formally communicated proactively to all classes of learners: students, residents, and peers. In fact, detailed, qualitative expression of freely given support (or other constructive feedback, positive or negative) should be regarded by learners as a responsibility analogous to teachers' responsibility to supply feedback and recommendations. Surveys assessing teaching, included in course evaluations by students, should contain sections designed specifically for promotional use, in addition to items oriented to course-specific feedback. In short, the promotion process will benefit from the proactive collaboration of learners, teachers, and administrators (course heads, department chairs, promotion committees).



Portfolios dedicated to teaching or educational performance have emerged as tools in the promotion process. In contrast to outcomes of teaching, the documentation of which can be capped by narrative testimonials, the portrayal of an educator's impact may benefit from a new tool that I call an “impact map.” Such a map charts the “track record” of an educator's innovation, from the original creative idea to changes in practice. One can visualize such a map's being used to trace a familiar example of educational innovation, R. M. Harden's objective structured clinical examination (OSCE), showing via a timeline the progression from the original concept to changes in practice.5 The map would begin in the 1970s with the development of an assessment process, with subsequent intermediate outcomes involving students' or residents' promotion, program evaluation, specialty credentialing, and most recently, licensing exams that affect not only those who practice but eventually those who are their patients.
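The article presents impact maps graphically; purely as an illustration of the underlying idea, the OSCE example above can also be sketched as a simple data structure: a chronological list of milestones, each tagged as an innovation, an intermediate outcome, or a health outcome. The field names, class, and the specific years after 1975 below are my own assumptions, not the author's format.

```python
# A minimal sketch of an impact map as data: a chronological list of
# milestones, each classified by the kind of impact it represents.
# Milestone wording paraphrases the OSCE example in the text; the years
# after 1975 are illustrative placeholders, and the structure itself is
# a hypothetical rendering, not the article's graphical format.
from dataclasses import dataclass

@dataclass
class Milestone:
    year: int
    description: str
    kind: str  # "innovation", "intermediate outcome", or "health outcome"

osce_map = [
    Milestone(1975, "OSCE concept published (Harden et al.)", "innovation"),
    Milestone(1980, "Adopted for student and resident promotion decisions",
              "intermediate outcome"),
    Milestone(1990, "Used in program evaluation and specialty credentialing",
              "intermediate outcome"),
    Milestone(2000, "Incorporated into licensing examinations",
              "intermediate outcome"),
    Milestone(2002, "Better-assessed clinical skills benefit patients",
              "health outcome"),
]

def render(track):
    """Print the map as a plain-text timeline, oldest milestone first."""
    for m in sorted(track, key=lambda m: m.year):
        print(f"{m.year}  [{m.kind}]  {m.description}")

render(osce_map)
```

Rendered this way, the map reads top to bottom from the creative seed to its downstream results, which is the progression the promotion committee is asked to evaluate.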

To further illustrate the concept of an impact map, I have created two maps (see Figures 1 and 2) to chart the impacts of two anonymous candidates for professorial promotion (both educators) whose contributions have been largely “behind the lines” of published scholarship. Despite a relative lack of the traditional types of productivity, each of them has achieved important outcomes by means of curricular development, advanced training of educators, and making possible innovative projects intended to change professional practices and eventually the health status of populations. One (Dr. “X”) is the cofounder and director of a faculty development project that has drawn hundreds of clinician-educator “scholars” from national and international medical schools over a seven-year period. Participants have designed and implemented practical projects, many of which have affected teaching and patient care at their home institutions. Figure 1 presents a timeline that shows the progress from an innovative idea, successful grant application, and collaborative development between the medical school and the education school to successive cohorts of educators. They, in turn, have initiated networking and regional conferences, career commitments to medical education, and, most importantly, a succession of educational projects in their local environments. Thus, through an outcome orientation, the promotion committee might better visualize the connection between this innovation, the candidate's impact on the participants, and the impacts of the participants' projects.

Figure 1

Figure 2

The second example, depicted in Figure 2, involves a subspecialist (Dr. “Y”), the head of a medical school course, whose larger endeavor is directing an international program of educational alliances and reform. This work has required skilled and innovative leadership in modeling and facilitating a sea change in curriculum and teaching at several international medical schools. Dr. Y's participation in a single program grew into conceptualizing and directing multiple collaborations. Later developments include planning international regional centers of excellence in medical education and seeking meaningful connections between medical and other faculties, such as pedagogy, economics, and biotechnology. Intermediate outcomes measurable as changes in curriculum, teaching, and assessment represent the most tangible results to date, with the more profound effects on educational and practice cultures still to be seen.

Each of these impact maps represents just one way of illustrating the candidate's work. Substantial detail (not included here) should be supplied and referenced by attached personal communications and published information. For the entirety of an educator's output, Boyer's four domains of scholarship could serve as an organizing principle.

Two questions may arise: First, could these outcomes be expressed in words, rather than “maps”? Yes, but a graphic depiction may draw attention more forcefully to an outcome orientation and demonstrate more clearly how the educator's leadership has carried innovation forward over time to value-added results. Intersecting paths on a consolidated map may suggest how scholarly, leadership, and teaching activities (often overlapping departmental lines) can contribute to overall impact. Just as important as the benefit to promotion committees will be the opportunity to focus the attention of rising educators on outcomes, rather than simply attracting recognition.

Second, can impact maps become widely applicable? My hope is that readers will try out impact maps as another way to document and vividly illustrate the accomplishments of educators and teachers in their institutions. Then time will tell whether the maps become a customary tool in assessing educators' and teachers' impacts.

The examples of impacts provided here have been chosen to dramatize educators' activities on a large stage, but all of these activities started as small collaborations and demonstration projects. They grew because they made a difference, as did the innovations of Lawrence Weed, R. M. Harden, and many others. The field of medical education remains fertile ground in any locale for any educator to sow innovation and reap positive change in educational or health practices.6 A constructive accomplishment at the end of a short, local path should be perceived and counted as outcome evidence, valuable in itself, but also potentially generating progress on a larger scale.

The specific suggestions I have put forward for assisting the promotion process for educators and teachers arise from a need for reorientation towards perceiving and measuring evidence of impact. If educational scholars and leaders grasp this opportunity to emphasize outcomes of the work of educators, especially clinician-educators (with or without the illustrative aid of impact maps), they may achieve a more secure connection to the credibility associated with laboratory and clinical investigation. In the case of teachers, the transformative impact of the best teaching opens channels to testimony from the learners, which can be compelling in the aggregate. Within these frameworks, future candidates and their faculties, working together proactively, may be able to join greater rigor with greater fairness and stimulate ongoing commitment to the educational enterprise.



1. Levinson W, Rubenstein A. Integrating clinician-educators into academic medical centers: challenges and potential solutions. Acad Med. 2000;75:906–12.
2. Boyer EL. Scholarship Reconsidered: Priorities of the Professoriate. Princeton, NJ: The Carnegie Foundation for the Advancement of Teaching, 1990.
3. Lubitz RM. Guidelines for promotion of clinician-educators. J Gen Intern Med. 1997;12(2 suppl):S71–S77.
4. Weed LL. Medical records, patient care and medical education. Irish J Med Sci. 1964;6:271–82.
5. Harden RM, Stevenson M, Downie WW, Wilson GM. Assessment of clinical competence using objective structured examination. BMJ. 1975;1:447–51.
6. Magill MK, Quinn R, Babitz M, Saffel-Shrier S, Shomaker S. Integrating public health into medical education: community health projects in a primary care preceptorship. Acad Med. 2001;76:1076–9.
© 2002 Association of American Medical Colleges