Academic Medicine: December 2000, Volume 75, Issue 12
Educating Physicians: Essays

Using Formal Evaluation Sessions for Case‐based Faculty Development during Clinical Clerkships

Hemmer, Paul A. MD; Pangaro, Louis MD


Author Information

Dr. Hemmer is assistant professor of medicine and associate director of the internal medicine clerkship, and Dr. Pangaro is professor of medicine and vice chairman for educational programs, both at the Uniformed Services University of the Health Sciences, F. Edward Hebert School of Medicine, Bethesda, Maryland.

Correspondence and requests for reprints should be addressed to Dr. Hemmer, Uniformed Services University of the Health Sciences, 4301 Jones Bridge Road, Bethesda, MD 20814; e-mail: 〈phemmer@usuhs.mil〉.

The authors thank Kelley M. Skeff, MD, PhD, for his review of the manuscript and his insightful and constructive feedback.

The opinions expressed in this paper are solely those of the authors and do not necessarily reflect the opinions of the Department of Defense, the United States Air Force, or other federal agencies.


Abstract

Developing housestaff and faculty in their roles as medical educators is a dynamic process. The rigorous clinical evaluation method used during the third-year internal medicine clerkship at the Uniformed Services University uniquely incorporates faculty development into the process of evaluating students and generating feedback for them. Formal evaluation sessions are held monthly at all clerkship sites throughout the 12-week clerkship and are moderated by either the internal medicine clerkship director or the on-site clerkship directors. Although designed to provide an opportunity for faculty to evaluate student performance and prepare formative feedback, the sessions also function as formal, planned, and longitudinal forums of “real-time,” “case-based” faculty development that address professional, instructional, and leadership development. The evaluation sessions are used as a means to model and teach the key concepts of the Stanford Faculty Development Program. Providing a unifying form of evaluation across multiple teaching sites and settings makes formal evaluation sessions a powerful, state-of-the-art tool for faculty development.

Our prior work has demonstrated that a rigorous clinical evaluation process that couples formal evaluation sessions with a descriptive, synthetic evaluation framework1 has substantial merit. This process can achieve a reliability similar to that of written examinations2 and enhances the identification of marginally performing medical students.3–6 Beyond its role in achieving credible descriptive evaluations of trainees, this evaluation process allows clerkship directors to provide professional development for housestaff and faculty—the key providers of clinical education to medical students.7 This is of particular importance, since the changing context of medical education necessitates that faculty development initiatives meet the needs of a growing and diverse group of instructors.8

One such initiative directed at improving teaching skills is the Stanford Faculty Development Program (SFDP), which encompasses the aspects of the teacher-learner interaction in an educational framework.9 The key concepts of the SFDP are: learning climate; control of session; communication of goals; understanding and retention; evaluation; feedback; and self-directed learning. The formal evaluation sessions at the Uniformed Services University (USU)1,6 provide internal medicine clerkship instructors with a unique program through which the key SFDP concepts are consistently modeled and explained. In fact, the investment in the evaluation sessions yields a “3-for-1” return: they help achieve credible, consistent evaluation at all teaching sites1,2; they generate formative feedback for each student; and they develop the educational skills of the participants through ongoing “case-based” discussions of the medical students with whom they have been working. The evaluation sessions allow us to provide faculty development for both novice instructors and experienced teachers. In this article, we use the SFDP concepts and definitions as a framework (Table 1) to describe how the formal evaluation sessions are used for faculty development and illustrate the vital role they can play for clerkship directors.

[Table 1: The SFDP key concepts and definitions used as the framework for the evaluation sessions]

However, we want to be clear from the start that in order to implement formal evaluation sessions, one need not have experience with, or be trained in the use of, the SFDP. Terminology and concepts from other methods of faculty development may be more applicable to a specific medical school and/or clerkship.10


BACKGROUND

The third-year internal medicine clerkship at the USU is 12 weeks long, with a six-week ambulatory care rotation and a six-week inpatient medicine ward rotation.11 Administratively, the internal medicine clerkship is overseen by the Department of Medicine chair, the vice chair for educational programs (VCEP), the internal medicine clerkship director, and individual on-site directors at each of the clerkship's seven core teaching sites, which are widely separated geographically (one site each in Virginia, Texas, Ohio, and Hawaii, and three sites in Washington, D.C.). Formal evaluation sessions are held monthly at all clerkship sites. These are planned meetings at which the on-site clerkship director (CD) meets with all of the instructors who are working with students in the clerkship. Fifteen minutes is allocated for discussing each student, and the on-site CD makes notes of each instructor's comments. The following day, the CD meets with each student individually to provide specific feedback, highlighting what the student needs to do in order to take the “next step” in his or her progress as a learner. These evaluation and feedback sessions have been an integral part of our clerkship since its inception.6

The evaluation sessions occur in “real time” during the clerkship and are a primary feature of a three-phase clerkship process that also includes orientation meetings with instructors prior to the clerkship and detailed critiques (based on SFDP key concepts) completed by students after the clerkship. However, the evaluation sessions also provide case-based “teaching about teaching” that complements other formal presentations and seminars. Our premise is that adult learners are most invested in learning skills that they can use immediately. For example, the most effective learning about congestive heart failure (CHF) occurs while physicians are caring for patients with CHF. Therefore, we use students in the clerkship as the “cases” for the formal evaluation sessions.

Formal SFDP seminars were introduced in the Department of Medicine at the USU in 1987 and are offered regularly at each of the clerkship training sites. Most faculty-level instructors have been participants. As part of their specific orientation, each on-site CD (1) completes the SFDP seminars and (2) learns to run the evaluation sessions through a program of observation and practice. Further training and discussions with the VCEP and the medicine clerkship director occur through phone calls, e-mail messages, and monthly clerkship working group meetings (remote site coordinators attend at least four of these meetings per year).

Using the SFDP concepts as the framework, we explain how we and the other on-site CDs model and teach each concept during the evaluation sessions.


FORMAL EVALUATION SESSIONS AS FACULTY DEVELOPMENT

I. Learning Climate

Creating a positive learning climate involves teacher behaviors that (1) stimulate learners (through enthusiasm for the topic and the learners), (2) involve learners (looking at and listening to learners, encouraging participation), (3) show respect and provide a comfortable environment (using learners' names, acknowledging learners' problems, inviting learners to voice opinions, and respecting divergent opinions), and (4) admit their own limitations (inviting learners to raise questions and avoiding being dogmatic).

To establish and model a positive learning climate, on-site CDs use a private conference room and provide snacks such as coffee and cookies. We usually thank instructors for taking time to come to the sessions and emphasize that their role with the students is key to a successful clerkship. Although all evaluators for a given student meet at the same time, comments are obtained first from the interns, next from the resident, then from the attending physician, and finally from the clerkship preceptor (a staff physician who works with the same three to four students for six weeks in a small-group tutorial process). In this way, the interns do not feel they must conform their comments to those of the other evaluators. Each evaluator addresses key areas of the student's performance and is allowed to speak without interruption. This allows the on-site CD to focus on the unique observations of each evaluator, to reinforce the value of each person's perspective, and to show respect for divergent opinions.

Each of the on-site CDs must adapt to the varying needs of different levels of instructors. For example, interns may feel “too close” to being medical students and may be uncomfortable with their new roles as teachers and evaluators. We supportively discuss interns' concerns while helping them to be honest in their evaluations. Their perspectives, like those of all the other participants, are always respected, and we do not pressure them to change their evaluations.

Meeting with evaluators allows us to pick up on subtle verbal and nonverbal cues. By recognizing and addressing these cues, we foster an environment in which evaluators can discuss difficult areas or their uncertainties about a particular observation. Simply stating, “It seems you might have some concerns about this student” is often enough to spark conversation.

We, in turn, teach instructors to focus on establishing an appropriate learning climate for their students.12 We particularly encourage behaviors that promote student involvement. For example, we emphasize to the instructors the importance of assuring students that they should concern themselves only with striving to be reasonable in their assessments and decisions—they do not have to be right. We explain how this simple message can immediately relieve a student's anxiety and create a willingness to explore possibilities without fear of retribution.

II. Control of Session

To maintain control of a teaching session, instructors must (1) adapt their leadership styles (from being directive, democratic, or nondirective) to fit the educational purpose, (2) pace the session (by calling attention to time, changing the pace of the discussion, covering all scheduled topics), and (3) focus the session (by setting an agenda, limiting distractions, staying on the topic).

We model the ability to focus and pace the evaluation sessions by minimizing distractions, establishing an agenda, and calling attention to the time restriction of approximately 15 minutes of discussion for each clerkship student. Each evaluator is given several minutes to make his or her observations about the student and only then will the on-site CD facilitate or direct, as necessary, the focus of the discussion. We model flexibility by asking whether there is a particular student whose discussion will require more time. In such cases, we begin by discussing this student, elicit necessary details, and arrange a separate meeting with the evaluator(s) if needed.

We use the evaluation sessions to teach instructors how and when to be directive in assigning student tasks on rounds, seek students' input into daily agendas, and set aside time to meet with students to answer questions and give feedback.

III. Communication of Goals

Communication of goals involves (1) establishing goals (defined as learner behaviors), (2) expressing goals (stating goals clearly and concisely, highlighting their relevance to the learners and the expected level of competence, and reiterating goals), and (3) negotiating goals (asking the learners for their goals, prioritizing goals for the session).

At every evaluation session, we reiterate clerkship goals, which include the growing independence of the students and their progression from being “reporters” on their patients to being “interpreters,” and then to being “managers” for their patients and/or “educators” of their colleagues.1 These steps are introduced to the housestaff and faculty at their monthly orientations to the teaching service and also through the clerkship evaluation forms. The goals are clear, concise, and described as student behaviors, and they provide a “diagnostic” framework that the housestaff and faculty can apply to each student, whether in the ambulatory care or the inpatient setting. Specific examples of student behaviors that characterize each level of achievement are presented at the evaluation sessions to help guide the participants.

This vocabulary of R-I-M-E (Reporter, Interpreter, Manager/Educator) becomes part of the lexicon of the instructors and is, of course, central to evaluating the students. The evaluation sessions provide time to discuss ways in which the students can take the “next step” and how instructors can and must then clearly relate this progression in their conversations with the students. Thus, we make explicit the linkage of evaluation and feedback to communication of goals.

Furthermore, we communicate to the instructors our goals for them. This includes clarifying their specific role with the students and reinforcing our expectations of their daily interactions with students. For example, the interns are expected to review and provide feedback on the students' daily progress notes and presentations; the residents are expected to assign patients, review written histories and physical examinations, and foster the students' progression to the “next step” during work rounds; and attending physicians are expected to allow the students to present the patients they have evaluated and offer their own opinions about cases.

We work with instructors to teach them how to use a variety of methods and behaviors for communicating goals to students. Examples include providing students with written guidelines and expectations, asking students what they would like to learn about on a specific patient, and setting higher goals when students are ready to take the “next step.”

IV. Understanding and Retention

To enable students to understand content and retain what they learn, instructors must (1) organize material (using overviews, transitions, enumeration), (2) clarify learning issues (using examples, defining terms, explaining relationships), (3) emphasize important information/skills (through repetition, visual aids), and (4) foster active learning (having learners apply the material to their own experiences; providing or referring to the literature).

At each evaluation session, we prominently display on a writing board the clerkship goals and areas of evaluation in such a way that the instructors can easily refer to them during their discussions. Also, the case-based discussions at the evaluation sessions provide a lasting impression and reference points for the participants. The process directly parallels the ways they learn about medical problems: instructors recall specific students and their evaluations, just as they vividly recall specific patients and their diagnoses.

The sessions provide an opportunity for “active learning” since we apply teaching concepts directly to the instructors' own students. For example, we emphasize the importance of definitions, both in clerkship goals and in medical terminology, and might “compare and contrast” performance at the reporter level with that at the interpreter level. Often there are several students working with the same instructors and their performances can help illustrate the differences. Finally, these concepts are further reinforced after the evaluation sessions since instructors apply their knowledge while the student is still in the clerkship.

V. Evaluation

Evaluation involves (1) observing, (2) questioning, and (3) fostering self-assessment in learners. With regard to questioning, instructors should consider (1) the types of questions to be asked (open-ended, closed), (2) the amount of “wait time” to give a learner to respond to a question (allowing several seconds after posing a question for the learner to respond), and (3) levels of questions (recall questions focus on recollections of specific knowledge, skills, or attitudes; analytic or synthetic questions require the learner to demonstrate understanding; application questions require the learner to apply content to a specific case).

The evaluation sessions, of course, focus primarily on this element of the SFDP. Based on the timing of the sessions (every three or four weeks), we explain their use for both formative and summative evaluation of the students.1 The on-site CD helps instructors move beyond a minimal level of descriptive evaluation, such as “good” or “fine,” by modeling more detailed, behavior-based descriptions of student performance. The CD accomplishes this by using a mix of open-ended and directed questions (e.g., “How is the student doing?”; “What specific aspects of his or her progress notes need improvement?”) and wait time (giving instructors ample time to respond to these questions). We ask instructors to recall specific examples of students' knowledge, skills, and attitudes, to synthesize these comments within the R-I-M-E framework, and then to apply their understanding of these issues to an individual student. We model a higher level of evaluation by asking instructors to identify the “next step” for the student in his or her development as a physician; we then can clearly link evaluation to feedback. There is also a level of self-evaluation that occurs during these sessions. With the on-site CD's assistance, instructors often recognize their own transitions from “reporting” their observations, to “interpreting” them, and then to “managing” student education, which further promotes understanding and retention. We make explicit the need for direct observation (e.g., of progress notes, patient interactions) and questioning (e.g., asking students to offer explanations for a patient's fever) as powerful tools for evaluation.

VI. Feedback

Key characteristics of valuable feedback include that it is specific (based on behaviors), frequent, timely (with attention to the appropriate setting), balanced (both reinforcing and corrective), and clearly labeled as feedback. Instructors should also take into account learners' reactions to their comments and help them to develop action plans based on the feedback. Feedback may be minimal (“right/wrong,” “good/bad”), behavioral (explaining the reasons the behavior is correct or incorrect), and/or interactive (such as allowing the learner to react to feedback, encouraging self-assessment, and developing an action plan with the learner).

At the evaluation sessions, the on-site CD models good feedback technique by providing immediate feedback to housestaff and faculty on their use of the R-I-M-E framework and their evaluations of students. We ask for clarification of comments and direct their attention to specific issues, as well as emphasize and explain the strengths or shortcomings of instructors' comments. For example, an evaluator may describe a student who is consistently ill-prepared and late for rounds but then propose a grade of “high pass” because he or she believes the student is “trying.” The CD emphasizes goal-based evaluation by asking, “If you feel this student is a `high pass,' what would describe a `low pass' or `pass'?” We emphasize the importance of key observations of student performance, whether positive or negative, and illustrate how these observations substantiate certain levels of achievement. In this way we illustrate how feedback is tied to evaluation as well as to understanding and retention.

We teach that feedback should be interactive, timely, specific, constructive, and nonjudgmental. Through this, we hope to help instructors give useful feedback to students. For example, if a resident reports that a student's written histories lack appropriate detail and focus, we can ask the resident, “What have you told the student?” We use this opportunity to stress the need for students to receive detailed formative and summative feedback from each instructor and examine ways, either individually or as a group, in which this might be done. Additionally, the on-site CD passes along all feedback to each student following the evaluation sessions to ensure that our responsibilities to the students are fulfilled.

Finally, housestaff and faculty are invited to give feedback to the CD during the evaluation sessions, that is, to express their concerns or suggestions for improving the clerkship. This provides another opportunity for instructors to practice the key components of effective feedback.

VII. Self-directed Learning

Self-directed learning is achieved/promoted when (1) the learner relies less on the teacher for direction, (2) the learner is treated as an active participant, (3) learning is relevant to the learner's needs, and (4) the learner's experience provides the internal motivation for learning.

At the evaluation sessions, we use the discussions of students as an opportunity to ask the teachers, particularly the housestaff, whether there are any educational issues they want to discuss. We try to use evidence from educational research to address instructors' concerns, but we are open to discussing situations for which we do not have empirical data or other evidence. In such cases, we explore ways to answer the question. For instance, an intern's inquiry about the effect of learning climate on students' perceptions led to a brief research report.12

With regard to the students, we discuss whether they can identify appropriate questions about their patients and to what extent they are independently reading. We also ask teachers how they model and teach self-directed learning, such as whether they answer questions that arise during a busy clinic day, or encourage students to probe for evidence in controversial areas.


DISCUSSION

The goal of faculty development initiatives is to address areas essential for the growth and performance of faculty members.13,14 Certain factors are likely to make these programs and their implementation more effective.13–15 Faculty development should be a planned activity with defined goals and expectations for the participants, and concepts should be reinforced in ongoing or subsequent meetings. Also, program leaders should seek participants' input so that the program can evolve to better meet their needs, and there should be an attempt to measure the effectiveness of the program at accomplishing its objectives. Further, a faculty development program should have a practical application, convey its goals in language that is simple and consistently applied, and use a format that resonates with participants, such as case-based learning for clinician-educators.

We believe our formal evaluation sessions meet these objectives. They provide a component of professional and instructional development10 for new as well as more experienced instructors by modeling and teaching well-recognized skills.9 The sessions also address the long-standing, nationally cited concerns of evaluators.16 Specifically, we clearly define each person's role, provide training in the evaluative process, repeatedly reinforce the clerkship objectives and evaluation criteria, and apply them in a format that goes beyond orientation memos or departmental meetings.7 During a six-week rotation with the medical students, each evaluator spends approximately one hour participating in one or two evaluation sessions. Since evaluators work with students repeatedly throughout an academic year, we can expect to meet with individual interns for a total of six to eight hours, with residents for two to four hours, and with staff physicians for two to three hours. This allows for intensive instruction with the least experienced evaluators and allows senior colleagues to serve as role models while receiving their own instruction.

Leading the evaluation sessions also provides a degree of leadership development for our on-site clerkship directors.10 All on-site CDs have participated in the formal SFDP and have also received additional training from the VCEP. Thus, at the local sites, these individuals become academic leaders and gain educational expertise. Each on-site CD participates in clerkship policy and procedural decisions and is a member of the professional organization for internal medicine clerkship directors.5,17

During the 12-week clerkship, the on-site CDs invest 45–60 minutes per student in the evaluation sessions (and a similar amount of time giving individual feedback), which is similar to the amount of time that internal medicine clerkship directors spend in the grading process,17 and is also consistent with published expectations for clerkship directors.18 However, our evaluative process occurs during the clerkship and triples the return on investment by providing evaluation, feedback, and faculty development. We have demonstrated that using the evaluation sessions results in remarkably reliable descriptive evaluations,2 an improved ability of instructors to identify students with marginal funds of knowledge,3 the generation of detailed, highly specific comments regarding students' professional demeanors,19 and an enhanced detection of deficiencies in professionalism.4

Instructors must attend and participate for the evaluation sessions to be successful. Across all our clerkship sites, attendance rates at the evaluation sessions vary from approximately 60% on the ambulatory care rotation to approximately 85% on the ward rotation.3,4 Common reasons for not attending include housestaff days off (because of residency requirements), conflicts with scheduled clinics or other administrative duties, and poor calendar keeping (instructors who do not make note of the dates and times). The reason for the lower attendance at the ambulatory evaluation sessions is unclear. It may reflect that the ward team sees itself as a more cohesive unit or, perhaps, that clinic attending physicians find it difficult to set aside 15 minutes or may assume (erroneously) that they have less to learn from the evaluation sessions. Strategies for improving attendance include stating the dates of the sessions at orientation meetings, sending e-mail reminders, making announcements at scheduled conferences such as morning report, and scheduling sessions for convenient times such as the lunch hour for ambulatory attending physicians. Nevertheless, even with rates of attendance lower than our evaluation-form completion rates (approximately 90–95%), the evaluation sessions have improved the identification of marginally performing students.3–6

Clearly, not all aspects of faculty development can or will be covered with each instructor during an evaluation session. Much of what is accomplished is done over the course of several sessions. Nevertheless, it is remarkable what can be accomplished at a minimum at each evaluation session: one can create an appropriate learning climate; set and maintain control of the session; clearly communicate goals for the clerkship and for individual instructors; strive for evaluation that is detailed, specific, and behavior-based; provide immediate feedback to instructors; and use the individual students being discussed as the springboard to teach one or more of the elements of the SFDP.


CONCLUSION

The credible evaluation of medical students is a demanding and time-intensive but critical process and is inseparably linked to the development of the teaching and evaluation skills of housestaff and faculty. The evaluation sessions at the USU provide a formal, planned, and longitudinal format for student evaluation and feedback that simultaneously provides a unique form of real-time faculty development during a clinical clerkship. This triumvirate of evaluation, feedback, and faculty development makes the evaluation sessions a powerful tool for clerkship directors.


REFERENCES

1. Pangaro L. A new vocabulary and other innovations for improving descriptive in-training evaluations. Acad Med. 1999;74:1203–7.

2. Pangaro LN, Jamieson T, Hemmer P, Gibson KF, DeGoes JJ. Descriptive Clinical Evaluation Can Achieve Reliability Comparable to Standardized Tests. Presented at the Association for Medical Education in Europe Conference, Vienna, Austria, September 1997.

3. Hemmer PA, Pangaro L. The effectiveness of formal evaluation sessions during clinical clerkships in better identifying students with marginal funds of knowledge. Acad Med. 1997;72:641–3.

4. Hemmer PA, Hawkins R, Jackson JL, Pangaro L. Assessing how well three evaluation methods detect deficiencies in medical students' professionalism in two settings of an internal medicine clerkship. Acad Med. 2000;75:167–73.

5. Lavin B, Pangaro L. Internship ratings as a validity outcome measure for an evaluation process to identify inadequate clerkship performance. Acad Med. 1998;73:998–1002.

6. Noel G. A system for evaluating and counseling marginal students during clinical clerkships. J Med Educ. 1987;62:353–5.

7. Magarian GJ, Mazur DJ. Evaluation of students in medicine clerkships. Acad Med. 1990;65:341–5.

8. Evans CH. Faculty development in a changing academic environment. Acad Med. 1995;70:14–20.

9. Skeff KM, Stratos GA, Berman J, Bergen MR. Improving clinical teaching: evaluation of a national dissemination program. Arch Intern Med. 1992;152:1156–61.

10. Wilkerson L, Irby DM. Strategies for improving teaching practices: a comprehensive approach to faculty development. Acad Med. 1998;73:387–96.

11. Pangaro L, Gibson K, Russel W, Lucas C, Marple R. A prospective, randomized trial of a six-week ambulatory medicine rotation. Acad Med. 1995;70:537–41.

12. Lucas CA, Benedek D, Pangaro L. Learning climate and students' achievement in a medicine clerkship. Acad Med. 1993;68:811–2.

13. Sheets KJ, Schwenk TL. Faculty development for family medicine educators: an agenda for future activities. Teach Learn Med. 1990;2:141–8.

14. Hitchcock MA, Stritter FT, Bland CJ. Faculty development in the health professions: conclusions and recommendations. Med Teach. 1992;14:295–309.

15. Bland CJ, Stritter FT. Characteristics of effective family medicine faculty development programs. Fam Med. 1988;20:282–8.

16. Tonesk X, Buchanan RG. An AAMC pilot study by 10 medical schools of clinical evaluation of students. J Med Educ. 1987;62:707–18.

17. Clerkship Directors in Internal Medicine, Evaluation Task Force Survey, 1996. Clerkship Directors in Internal Medicine, Washington, DC, 1996. 〈http://www.im.org/cdim/5educate/eval/evtfsurvey96.htm〉. Accessed 11/17/99.

18. Pangaro L. Expectations of and for the medicine clerkship director. Am J Med. 1998;105:363–5.

19. Pangaro LN, Hemmer P, Gibson KF, Holmboe E. Formal evaluation sessions enhance the evaluation of professional demeanor. Paper presented at the 8th International Ottawa Conference on Medical Education and Assessment, Philadelphia, PA, July 1998.

© 2000 Association of American Medical Colleges
