Cole, Karan A. ScD; Barker, L Randol MD, ScM; Kolodner, Ken ScD; Williamson, Penelope ScD; Wright, Scott M. MD; Kern, David E. MD, MPH
Training in teaching skills is a critical step in the professional development of clinician–educators.1–5 Teaching skills programs have been shown to be effective,6–18 and considerable progress has been made in increasing their availability in the past 15 years. However, according to a recent national survey, only 39% of teaching hospitals have ongoing faculty development activities in teaching skills for their department of medicine faculty, and, on average, fewer than 50% of their faculty participate.19
Among teaching skills programs that include more common teaching approaches (lecture, discussion, distance learning, coaching, and skills practice),6–11,16,20 few report using reflection.6–8 Reflection is important for effective teaching,21 and has been fostered deliberately to help preceptors in the ambulatory setting to improve their teaching.22 It contributes to personal growth in medical faculty, which can enhance their teaching effectiveness.23 Reflection is defined as giving conscious attention to one’s interpretations of experience through awareness of thoughts, feelings, beliefs, and behaviors.21 Individuals can be stimulated to question their interpretations through experiential learning activities (daily events, role-play, or simulations that are also used to practice new skills), or by observation of others. Reflective activities, when combined with skills practice, may result in more durable change than would skill acquisition alone, because they can produce new insights and motivation for change. Individuals are more likely to engage in experiential or reflective learning if they identify their own needs, resources, and improvement, progress at their own pace (self-directed learning), and feel supported by others.24 Supportive facilitation involves a collaborative focus on the needs of the learner (learner-centeredness) and nurtures trust. A learning environment that simultaneously provides opportunities for challenge and is supportive further enhances the potential for change because it helps individuals practice new skills, self-appraise and disclose honestly, and solicit others’ opinions and feedback.25
Time is required to build the type of trusting relationships26 that support taking risks in self-discovery and learning. However, most teaching skills programs are short workshops or courses.19 Longer programs vary in frequency and duration (four weeks to two years).6–11,16,20
In this article, we describe an intensive (3.5 hours weekly), longitudinal (nine months) faculty development program in teaching skills that emphasizes a combination of processes that are essential for reflective learning, including: experience, reflection, self-direction, learner-centeredness, and relationship development. The evaluation of this program is one of the few that incorporates a nonparticipant comparison group,12,16,17 and to our knowledge it is the first to use multivariate regression modeling to assess whether personal characteristics affect the relationship between program participation and outcomes.
The Johns Hopkins Faculty Development Program for Clinician–Educators was established in 1987, underwent curricular and methodologic revisions in 1997, and continues to be implemented at present. It includes programs in teaching skills (TS) and curriculum development (CD), a facilitator- training program, a medical education fellowship, and a consultation service. It was designed by a team of general internists and behavioral scientists, who continue as program planners and facilitators.
The TS portion is designed to promote change in clinician-educator attitudes, beliefs, and behaviors towards learners, patients, and colleagues. Program goals are to enhance participants’ (1) teaching effectiveness, (2) professional effectiveness beyond teaching, (3) teaching enjoyment, and (4) learning effectiveness.
The program structure assists learning by providing ongoing opportunities for (1) building trust among participants and facilitators through relationship development and collaboration, and for (2) cycles of observing, practicing, and applying teaching skills in the classroom and in work settings, and reflecting upon these experiences. The program runs from September to June each year, for 3.5 hours weekly. Participants spend the majority of this time in stable working groups that include one or two facilitators and four to eight participants.
The program’s overall learning goals are for participants to experience, value, and improve skills in (1) facilitating self-directed learning and self-discovery, and (2) creating a collaborative, supportive, yet challenging learning environment. These are applied to specific skills in most content areas.
Seven content areas are addressed in individual sessions, or modules, which are five to seven weeks in duration. Each module builds upon and incorporates skills learned previously. For each, specific learning objectives are provided, together with targeted readings.
Content areas include:
* Adult learning concepts
* Time management
* Feedback provision and elicitation
* Small-group leadership and participation
* Physician–patient communication
* Precepting in clinical settings
* Leadership and management of work teams
A fundamental educational strategy for the program is the implementation of a parallel process whereby facilitators model the skills that are being learned by program participants, develop trust among themselves, and use experience, self-appraisal, and feedback to work collaboratively on their own knowledge, attitudes, and skills.
Educational methods used across content areas include information provision (readings, demonstration, presentations), experiential learning with reflection (role-play, simulated learners, real-life applications, and videotaping, all of which are learner centered and self-directed, and involve self-appraisal, feedback, problem solving, and discussion), and personal awareness sessions (sharing of meaningful experiences with emotional content).
Description of One Module
The feedback module illustrates how structural components, learning goals, parallel processes, and specific educational methods are applied to a content area to support learning (see Figure 1). Prior to this module, participants have been introduced to the concepts of adult learning. The first of the five sessions in this module begins with videotaping each participant while he or she is providing feedback to a standardized learner. A demonstration follows, from which participants identify those behaviors that are more or less effective. Discussion then bridges to a didactic summary of core concepts and skills that incorporate adult learning concepts. A syllabus names expected participant outcomes and contains selected readings, a detailed skills glossary, and scenarios for skills practice. Participants then observe their videotapes, self-assess, get feedback from the facilitator and each other, and identify learning strengths, needs, and resources for improving their skills throughout the module.
The first hour of most subsequent small-group sessions is reserved for the participants to share meaningful aspects of their professional and personal lives. These personal awareness sessions utilize a process described by Novack and colleagues.27 These unstructured sessions emphasize feelings and offer the chance to observe and practice active listening and emotion-handling skills (not interrupting, and validating and empathizing with feelings). The reflection and self-disclosure that occur often help participants to recognize and address attitudes, beliefs, and feelings that influence their capacity to use new skills and concepts. In addition, these sessions reduce the sense of isolation that so often accompanies the professional life of a clinician–educator.28
In the skills session that follows, participants describe recent or distant real-life experiences with feedback. Group members help their fellow participants identify accomplishments, learning needs, and ways to address them. Participants choose resources for their learning, such as discussion of readings, problem solving, solicitation of other members’ thinking and experience, skills practice (usually in role plays based upon self-selected situations), or observation of videotapes. They also direct their learning by asking for specific feedback about their performance. Personal reflection and learning may occur if they also explore related attitudes and feelings.
Participants identify their progress in providing effective feedback and new learning needs based upon their self-appraisal and others’ feedback. There is time for two or three participants to go through this process in a single session. Usually, more than one participant’s needs are met in learning activities that focus upon the needs of one participant. A self-summary of significant learning completes each session and informs planning for the next week.
Among participants, providing “negative” feedback is universally identified as difficult. In addition to practicing skills, individuals are helped to recognize and explore their thoughts and feelings about providing negative feedback and to hear others’ perspectives. They also experience other members’ support and empathy. New insights and motivation for change occur.
Throughout the module, a parallel process is applied. Facilitators use effective feedback skills and support learner-centered participant discovery and learning while they lead the experiential and reflective activities. They keep the group focused on each individual’s stated needs, and help individuals build relationships and learn from each other. Participants observe these skills, and also have direct experience with the methods as learners. Following each session, co-facilitators of each small group use reflective learning by meeting to talk about their co-facilitation, their beliefs and feelings related to challenges encountered, and alternative ways to respond. They also plan for the next session.
The final feedback session includes videotaping each participant providing feedback to a standardized learner, reviewing these videotapes (as in the first session), summarizing participants’ learning, and planning for future applications. Group members explore their shared experience and the relative helpfulness of the learning and facilitation methods, and complete an evaluation instrument.
At the end of the module, all facilitators meet as a team, share new insights, and use the verbal and written input from the participants to plan for upcoming modules. A module on small-group leadership, participation, and process follows that offers opportunities for continued practice and application of feedback skills.
All 131 TS program participants and 131 program nonparticipants between 1988 and 1996 were recruited for the present study. Most of the participants were clinician–educators from Maryland, Washington, D.C., and Pennsylvania who had heard about the program through word of mouth or the Johns Hopkins Continuing Medical Education Office. Each program participant identified a program nonparticipant for the comparison group who was similar in age, gender, professional training, and professional role. Three program nonparticipants who were later participants, and 31 program participants who took CD before or during TS were excluded, leaving 100 eligible participants and 128 eligible nonparticipants. Although the program has continued, data collection was discontinued in 1997 because preliminary analyses indicated that statistical significance had been achieved, and that newer findings were mimicking prior results.
Design and Methodology
A pre–post study design with comparison group measured changes in participants’ and nonparticipants’ self-assessments of perceptions and skills, and a post-only study design without a comparison group measured the participants’ assessments of programmatic components. Program participants and nonparticipants were asked to help us evaluate the program, and told that information would be kept confidential and reported only in the aggregate. When data collection was begun, review of educational program evaluations was not required by our institutional review board (IRB). Therefore IRB review was not sought. Program participants completed questionnaires during protected time at the beginning and end of the program, and at the end of each module. Program nonparticipants responded to questionnaires mailed within three months of the start, and again when their identifying participants finished TS, or both TS and CD.
Outcome Variables, Instruments, and Measures
Individual outcome variables
These were measured at baseline and the end of the program by participant and nonparticipant self-assessment, using three-, four-, or five-point ordinal scales, including:
* Teaching effectiveness, measured as skill levels for determining learner needs, actively involving learners, lecturing, small-group teaching, one-on-one teaching, teaching in the presence of the patient, giving feedback, evaluating learners, and global teaching
* Professional effectiveness other than teaching, measured as skill levels for working in groups, time management, and administration
* Teaching enjoyment
* Learning effectiveness, measured as self-directed learning competence, a mean score of nine items assessing individual self-perceptions as a nondependent learner, the ability to relate collaboratively with peers and teachers as helpers, an understanding of the assumptions of self-directed learning, and the ability to utilize the various stages of the self-directed learning process.
Internal consistency reliability for this instrument, developed by Malcolm Knowles,29 was determined among participants within our study (Cronbach alpha = .86).
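The standard formula for Cronbach's alpha can be sketched in a few lines; the item responses below are hypothetical illustrations, not the study's data (the study's nine-item instrument yielded alpha = .86).

```python
# Illustrative computation of Cronbach's alpha for a multi-item scale.
# The three items and five respondents below are hypothetical, not study data.

def cronbach_alpha(items):
    """items: list of equal-length lists, one list of responses per item."""
    k = len(items)                      # number of items
    n = len(items[0])                   # number of respondents

    def variance(xs):                   # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_var_sum = sum(variance(it) for it in items)
    totals = [sum(it[i] for it in items) for i in range(n)]  # per-respondent total score
    total_var = variance(totals)
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Three hypothetical items rated by five respondents on a 5-point scale
items = [
    [4, 3, 5, 2, 4],
    [4, 2, 5, 3, 4],
    [3, 3, 4, 2, 5],
]
alpha = cronbach_alpha(items)
print(round(alpha, 2))  # -> 0.87
```

Alpha rises toward 1 as the items covary strongly relative to their individual variances, which is why it is read as internal consistency.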
Program evaluation outcome variables
These were measured at the end of individual modules or the program for participants only, and included multi-item assessments of:
* Overall program quality
* Educational methods, facilitation, and learning environment
* Impact of participation on teaching and professional abilities.
Participants’ responses to the program were measured by a variety of four- or five-point ordinal and interval scales.
Continuous measures that were highly skewed or nonnormally distributed were collapsed into categorical variables on the basis of the distribution, conceptual meaning, and/or ease of description. Similarly, some categorical measures with sparse cells were collapsed into fewer categories. The comparability of the participant and nonparticipant groups at baseline for all measured personal characteristics and outcome variables was assessed using chi-square and t tests, as appropriate to the level of measurement.
Pre–post change scores were created for outcome measures by subtracting preprogram from postprogram values. Comparisons within and between groups using change scores were chosen as the primary method to determine the success of the program.
To assess the relationship between program participation and outcome change scores, ordinary least squares regression was used. Unadjusted difference scores (betas) are the same as the difference between participants’ and nonparticipants’ change scores and were used to represent the effect of participation alone. Adjusted difference scores (betas) were used to represent the effect of participation when controlling for all other subject baseline characteristics. In the multivariate model, where all variables are included, the partial r2 indicates the proportion of variance in change scores explained by participation after controlling for all covariates. The model R2 indicates the proportion explained by all of the variables together.
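The equivalence between the unadjusted difference score and the regression beta can be shown directly: regressing change scores on a 0/1 participation indicator yields a slope equal to the difference between the two groups' mean changes. The change scores below are hypothetical, chosen only to illustrate the identity.

```python
# Sketch of the unadjusted difference score: the OLS slope on a binary
# participation indicator equals the difference in group mean change scores.
# All change-score values here are hypothetical, not study data.

def ols_slope(x, y):
    """Slope of the least-squares line of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    var = sum((xi - mx) ** 2 for xi in x)
    return cov / var

# Post-minus-pre change scores for hypothetical subjects
participant_change = [1.0, 0.8, 1.2, 0.9]      # indicator = 1
nonparticipant_change = [0.1, 0.0, 0.2, 0.1]   # indicator = 0

x = [1] * len(participant_change) + [0] * len(nonparticipant_change)
y = participant_change + nonparticipant_change

beta = ols_slope(x, y)
mean_diff = (sum(participant_change) / len(participant_change)
             - sum(nonparticipant_change) / len(nonparticipant_change))
print(beta, mean_diff)  # the two quantities coincide
```

Adding baseline covariates to such a model turns the slope on the indicator into the adjusted difference score the article reports; with comparable groups, adjusted and unadjusted betas are nearly identical, as the authors found.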
To simplify the presentation of end-of-module and end-of-program evaluation results, four-point scales were transformed to five-point scales. In addition, summary mean scores were created for those items that were conceptually similar and highly associated, and for identical items that were measured after each module. Differences among participant ratings of educational methods were assessed utilizing a paired t test, in order to help us identify participant preferences.
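The article does not specify the exact transformation applied to the four-point scales; a common approach, shown here only as an assumption, is a linear rescaling that maps the endpoints of a 1-to-4 scale onto a 1-to-5 scale.

```python
# Hypothetical linear rescaling of a 4-point scale (1..4) to a 5-point
# scale (1..5). The article does not state which transformation was used;
# this endpoint-preserving map is one conventional choice.

def rescale_4_to_5(score):
    """Map a score on a 1..4 scale linearly onto a 1..5 scale."""
    return 1 + (score - 1) * (5 - 1) / (4 - 1)

print([rescale_4_to_5(s) for s in [1, 2, 3, 4]])  # endpoints map to 1.0 and 5.0
```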
Pre–post program data were available for 98 out of 100 program participants (98%) and 112 out of 128 program nonparticipants (87.5%) who met the inclusion criteria. Of the two program participant nonresponders, one dropped the program and one completed only baseline questionnaires. Of the 16 program nonparticipant nonresponders, one refused participation in the study (6.3%), ten completed baseline measures only (62.5%), and five completed neither pre nor post measures (31.3%).
Baseline Sample Characteristics
Fourteen out of 16 baseline characteristics were distributed similarly in program participant and nonparticipant groups, indicating comparable groups at baseline. More nonparticipants than participants had assistant or associate professor faculty appointments and had prior training in teaching skills (p < .05). (See Table 1.)
Outcome Variables at Baseline
On average, program nonparticipants rated themselves significantly higher than participants at baseline for each of the specific teaching and professional skills (p < .001 for all except administration, p < .05), and for self-directed learning competence (p < .05). (See Table 2.)
Pre- and Postprogram Comparisons
Participant self-assessments of all 12 teaching and professional skills significantly increased (p < .001 for all except administration, p < .05), whereas nonparticipant self-assessments of only lecture skills significantly increased (p < .05), and those of time management skills significantly decreased (p < .05). Teaching enjoyment significantly decreased among program nonparticipants (p < .05), whereas it remained stable among participants. Participant self-assessments of self-directed learning competence (p < .001) significantly increased, whereas nonparticipant self-assessments were maintained. (See Table 2.)
Pre–post change scores were significantly higher for participants than for nonparticipants for all dependent variables (p < .05 for administration skills and teaching enjoyment, p < .001 for all others). Unadjusted and adjusted difference scores obtained through multiple regression modeling indicate that program participation was associated with all pre–post outcome change scores except for administration skills, controlling for all baseline program participant and nonparticipant characteristics. (See Table 3.) Only adjusted difference scores appear in the table because they were nearly identical to the unadjusted ones.
Measures of overall program quality, educational methods, facilitation, learning environment, and the impact of program participation indicate that the program was well-received by participants. Ratings of experiential learning methods with reflection were significantly higher (p < .001) than those for information-provision methods or personal awareness sessions, with or without the inclusion of seven participants who rated personal awareness sessions as “not at all useful” to their learning. Information-provision ratings were significantly higher (p < .05) than were those for personal awareness sessions. However, this difference was no longer significant when those seven participants were dropped from the analysis. (See Table 4.)
The faculty development program in teaching skills that we have described here utilizes a theory-based educational model that incorporates strategies to promote change in participants’ knowledge, attitudes, skills, and the application of these to real-world settings. A systematic approach was used for program planning and evaluation.30
A noteworthy measure of this program’s effectiveness is that it brought participants’ appraisals of their teaching and professional skills up to the baseline level of nonparticipants’ self-appraisals of their teaching and professional skills; more of the nonparticipants had reported prior training in teaching skills. It is possible that the significantly higher nonparticipant baseline outcome ratings created a “ceiling effect,” making it more difficult for them to augment their appraisal of skills to the same extent as that reported by participants. While a ceiling effect cannot be ruled out, nonparticipants’ appraisals were not at the highest possible level of the rating scales, indicating that there was, in fact, room for potential improvement. An explanation for higher nonparticipant baseline appraisals and lack of self-assessed improvement is that more of them had prior training in teaching skills. However, participants and nonparticipants who had prior training in teaching skills had significantly higher baseline outcome ratings, and, irrespective of prior training level, participants demonstrated significantly more change than nonparticipants.
Also remarkable is the finding that teaching enjoyment was maintained in the participant group while it declined in the comparison group, suggesting that the program may buffer some of the challenges faced by clinician–educators. In Gerrity and colleagues’31 review of 20 studies on clinician–educator career satisfaction, two commonly expressed factors that counterbalance the satisfaction derived from teaching were found to be the tension between teaching and patient care, and doubts about teaching skill. In addition to training in teaching skills, they recommended various strategies to improve satisfaction: emphasizing intrinsic rewards, making teaching interesting and fun, and providing external rewards and recognition. We speculate that our program accomplishes some of these goals for our participants through skills development, affirmation of the importance and rewards of teaching, and group support.
We were encouraged that participants rated experiential learning methods with reflection significantly higher than they rated information provision. While clinical training in medicine has been predominantly experiential since the reformation in medical education during the early part of the 20th century,32 lecture and discussion remain the primary methods for classroom teaching. This suggests that role-play, simulations, videotaping, and self-appraisal may have been less familiar to our participants than were reading literature and listening to presentations. Perhaps more faculty development programs will consider using experiential methods with reflection, given evidence within this study and others7,12,13,15–18 that they are effective, and within this study that they are preferred by participants.
The lower rating of personal awareness sessions compared with information provision was no longer significant once seven participants who rated personal awareness sessions as “not at all useful” to their teaching were omitted from the analysis. However, the finding that personal awareness sessions were also rated significantly lower than experiential learning methods with reflection, with or without the inclusion of these seven participants, is intriguing. Both methods use reflection, but the content, amount of structure, and processes are different. The content of the reflection in personal awareness sessions is personal meaning and emotions. In contrast, professional behaviors and attitudes are the primary focus in skills sessions that use experiential methods with reflection. Participants are encouraged to listen and empathize in personal awareness sessions with little structure imposed. In skills sessions, structured problem solving is integrated with some attention to attitudes and feelings. We speculate that professional skills practice and reflection are more familiar, expected, predictable, and concrete than is dealing with personal meaning and emotion, resulting in a greater comfort level.
Notwithstanding their relatively lower rating, personal awareness sessions were rated, on average, as more than moderately useful to participants’ learning. We believe that they were important for promoting the collaborative and supportive relationships among our participants, enabling them to take risks,25 and to promote their personal awareness, which can enhance their teaching effectiveness.23 However, we may need to consider more structured approaches to personal awareness, such as facilitating modified Balint or Family of Origin groups,27 as alternatives to our less structured method.
Key aspects that distinguish this program from other longitudinal ones6–8,16 are its structure (weekly, 3.5-hour sessions for nine months in stable groups), and the combined emphasis on relationship, collaboration, learner self-direction, experience, and reflection. Although each of the other programs has some of these elements, none has all, nor does any cover the breadth of content area related to teaching, professional, and learning effectiveness. The seminar method of Skeff and colleagues16 intensively integrates provision of information and experiential learning twice weekly for two hours over four weeks. The cycles of experience and problem-solving repeat, but for a short time, and there is no attitudinal or emotional reflection. In spite of its 18-month duration, the one-hour session length and twice-monthly frequency of the program described by Elliot and colleagues7 requires that only one reflective method can be chosen each session, and the intervals between sessions are longer than ours. The program by Pololi et al.6 uses monthly meetings and provides for monthly cycles. However, the intensity may be greater than ours, due to the one- and three-day-long meetings. The program described by Hewson and colleagues8 is shorter (three months) and less frequent (twice-monthly meetings). The Skeff, Elliot, and Hewson programs only focus on teaching skills, and the Pololi program focuses primarily on broader professional development.
We do not know whether the particular frequency, length of sessions, and duration of our program increases the likelihood for the benefits of reflective learning to occur, in comparison with the other programs that use similar methods. However, theoretically, the particular combination of structure and methods of our program comprises a set of conditions for reflective learning. We believe that frequent, ongoing, intensive meetings among the same individuals, with an emphasis on relationships, are important to achieving trust in learning groups. In addition, the length of each session allows for time to engage in both skills practice and multiple levels of reflection, which may have more impact than any of these alone. This structure also facilitates successive cycles of action and reflection-on-action described by Schon33 as reflective practice, and could promote ongoing use of this process by program participants.
We also do not know whether reflective learning outcomes (insights and reassessment of assumptions that, according to Mezirow,34 result in a shift in perspective, or transformational learning) occurred. However, we speculate that participants leave our program with an increased valuing of relationships and of reflection on attitudes, beliefs, and feelings, and an increased likelihood of connecting with learners, of supporting their self-directedness, and of helping them be critically reflective. These anticipated shifts in perspective may ultimately be more important than enhancement of specific skills. More study is needed to determine whether such learning is occurring, whether it results in a change of life perspectives or teaching perspectives in the learner, and whether these changes result in improved teaching.
A further speculation about outcomes is that increased capacity for relationship building and support for self-direction with learners may result in an institutional educational culture change, particularly through in-depth training of multiple faculty from the same institution. In contrast, the Stanford dissemination model35 emphasizes dissemination of specific skills content to multiple institutions by training faculty intensively for a month, and having them teach seven seminars at their own institutions. We have anecdotal evidence that the educational culture in our institution and the educational cultures in others with multiple graduates from our program have changed in a way that residents and faculty colleagues appreciate, and that some of our graduates have adapted our model at their own institutions for residents and faculty. More information is needed about the institutional impact of our program.
This program is resource intensive as a consequence of its structure and methods. Fortunately, the program has received federal grant support for all but one year since 1987. A rough estimate of the direct cost of training in the program is about $6,400 per participant, not including release time. In comparison, the estimated cost for the training of participants in a recent less intensive and less comprehensive national program was about $2,900 per participant.36 More information is needed about the costs (both direct and indirect) versus outcomes of various faculty development approaches to render judgments regarding the relative value of different models.
Strengths of our evaluation included its comprehensiveness, inclusion of a comparison group, multivariate modeling, and the high response rate. Although two of the programs previously reviewed had a comparison group,12,16,17 none included multivariate modeling methodology.
Several limitations should be considered. First, randomization of study recruits to participation or nonparticipation groups would have resulted in a comparable distribution of all measured and unmeasured sample characteristics at baseline. While selection bias cannot be ruled out, multivariate regression modeling enabled us to address noncomparability of prior training and academic appointment level between participants and nonparticipants at baseline. Second, randomization of participants to different combinations of educational methods would have allowed us to explore the efficacy of our reflective learning methods. Third, inclusion of reflective learning outcomes would have allowed us to explore whether shifts in insight, assumptions, and perspectives had occurred, and how these were related to our change outcomes. To address this limitation, we will be qualitatively exploring participants’ comments about ways in which they are different as the result of participation in the program, and personal awareness sessions in particular. Fourth, insufficient measures were included to better understand the relationship between personal characteristics and perceived educational method usefulness or value. Fifth, self-assessment and other subjective findings are reported, rather than those from objective measures. This limitation is tempered by the fact that objective verification of skills change has been demonstrated through independent ratings of videotaped pre–post feedback performance with simulated learners, which has been reported in abstract form.37 Sixth, short-term, end-of-program results, rather than long-term outcomes, were investigated. To address this limitation, a two- to 14-year postintervention follow-up of program participants and nonparticipants has been conducted and the data are currently being analyzed. Hopefully, this additional study will help to further define the long-term effectiveness of the skills training.
In summary, we have described a unique, intensive, longitudinal model for training in teaching skills that was designed to produce substantive changes in the assumptions, skills, and real-life performances of participants. It provides an alternative, based upon educational theory and current evidence, to other models for faculty development in teaching skills. Notwithstanding the limitations of our data, we have provided evidence of the feasibility, durability, and participant-perceived effectiveness of our model.
An earlier version of this article was presented as a poster at the Association of American Medical Colleges (AAMC) Conference on Research in Medical Education, Washington, D.C., November, 2003. This work was partially funded by the National Institutes of Health, Department of Health and Human Services, grant number D28 PE53014.