A Criterion-based, Peer Review Process for Assessing the Scholarship of Educational Leadership

RICHARDS, BOYD F.; MORAN, BETTY JEANNE; FRIEDLAND, JOAN A.; KIRKLAND, REBECCA T.; SEARLE, NANCY S.; COBURN, MICHAEL

Section Editor(s): Strayhorn, Gregory MD, PhD

PAPERS: CAREERS IN MEDICINE—“YOU'VE GOT TO BE CAREFUL IF YOU DON'T KNOW WHERE YOU'RE GOING BECAUSE YOU MIGHT NOT GET THERE”

Correspondence: Boyd F. Richards, PhD, Baylor College of Medicine, One Baylor Plaza, Room M301, Houston, TX 77030; e-mail: 〈boydr@bcm.tmc.edu〉.

In their call to action, “Scholarship in Teaching: An Imperative for the 21st Century,” Fincher et al.1 identify the challenge medical schools face in implementing an “infrastructure needed to foster, assess, and reward scholarship in teaching and other activities supportive of learning.” Speaking on behalf of the Group on Educational Affairs project on scholarship and building on Boyer's seminal work,2 these authors challenge “medical educators who accept responsibility for fostering scholarship in teaching” to “provide mechanisms to support the creation, critical review, and dissemination of educational scholars' works,” and then to evaluate those mechanisms to “understand the reasons some methods are more effective than others.”

Consistent with this challenge, in 2001 Baylor College of Medicine (BCM) implemented a program to recognize faculty educational contributions. The program relies on criterion-based peer review, uses established criteria for assessing the quality of scholarship,3 and provides structured formats to facilitate the preparation and review of mini-portfolios in four distinct categories of educational endeavor: teaching and evaluation; educational leadership; development of enduring educational materials; and educational research.

The goal of this program is to enable consideration of faculty educational contributions as scholarship so that the outcome of these considerations can contribute positively to the promotions and tenure process. For such a program to work over time, it must gain a reputation of merit.4 To those involved, it must look and feel comparable to the traditional peer-review process used in selecting grants for funding or manuscripts for publication.

In this paper, we briefly describe BCM's faculty education recognition program and how it was implemented in 2001 for the category of educational leadership. We then report outcome and review panel member survey data that address the following questions:

  1. How long do reviews take to complete?
  2. Do reviewer panel members (i.e., “reviewers”) make consistent ratings?
  3. Do reviewers perceive that the components of the program work as intended?
  4. Do reviewers perceive the overall process to be credible?
  5. What changes do reviewers recommend?

Method

General Description of Recognition Program

Beginning in 1997, BCM's Committee for Educator Development has worked to enhance the school's pro-teaching environment. In 2001, after several years of grass-roots planning, a review of the literature,5–7 and consideration of recognition programs at other institutions,8 the committee launched the recognition program with strong support from school leadership, including the chair of the promotions and tenure committee. With the receipt of a generous endowment, the program was named The Fulbright and Jaworski L. L. P. Faculty Excellence Award.9 The award recognizes faculty with a wall plaque, presented at an annual ceremony, and with a five-year membership in the school's Academy of Distinguished Educators.

According to design guidelines,9 faculty are invited semiannually to submit one or more mini-portfolios, each containing evidence of scholarship specific to the category of interest. The mini-portfolio consists of a one-page personal statement and a one- to two-page structured summary of accomplishments. An applicant submits the mini-portfolio with supporting documentation in appendices (e.g., letters from peers, learner evaluation data) and a curriculum vitae (CV).
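
For illustration only, the required components of a submission could be modeled as a simple record, as in the following Python sketch; this is written under our assumptions, and the field names are ours, not part of the design guidelines.9

    # Hypothetical sketch of a mini-portfolio submission's components;
    # field names are illustrative, not BCM's official template.
    from dataclasses import dataclass, field

    @dataclass
    class MiniPortfolio:
        category: str                  # one of the four award categories
        personal_statement: str        # one page
        structured_summary: str        # one to two pages
        appendices: list = field(default_factory=list)  # supporting documentation
        cv: str = ""                   # curriculum vitae

    submission = MiniPortfolio(
        category="educational leadership",
        personal_statement="...",
        structured_summary="...",
        appendices=["letters from peers", "learner evaluation data"],
    )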


The Review Process

Modeled after the National Institutes of Health's method for reviewing research proposals, a representative panel of respected faculty evaluates the mini-portfolios. The panel includes a balance of MDs, PhDs, and educational specialists, all invited to contribute to the panel because of their status and experience in education. The panel rates the mini-portfolios for quality, quantity, and breadth. Reviewers use the six criteria for scholarship proposed by Glassick, Huber, and Maeroff3 to assess the criterion of quality: (1) clear goals, (2) adequate preparation, (3) appropriate methods, (4) significant results, (5) effective presentation, and (6) reflective critique.

Reviewers rely on the personal statement to provide the information necessary to rate the applicant's goals, preparation, presentation of “lessons learned,” and reflective critique. Reviewers assess the criteria of quantity and breadth by considering the number, scope, magnitude, variety, and intellectual demand of the contributions identified in the structured summary. They rate the quality of methods used in making those contributions, as well as the quality of the results and/or outcomes those methods produced.

For each mini-portfolio, a primary reviewer and a secondary reviewer, neither from the applicant's department, are assigned to read the supporting documentation and CV and to lead the discussion at the selection meeting. All reviewers perform a general review of all mini-portfolios. During the discussion of a mini-portfolio, the primary and secondary reviewers share their ratings for each criterion using specified point ranges: 1 to 50 for quality, 1 to 40 for quantity, and 1 to 10 for breadth. The quality range is subdivided using Glassick's criteria3: goals, 1 to 5; preparation, presentation, and self-critique combined, 1 to 10; and methods and results combined, 1 to 35, so that the subranges sum to the 50-point quality maximum.

After considering the ratings of the primary and secondary reviewers and discussing any areas of disagreement, panelists individually assign an overall score to each mini-portfolio on a 100-point scale. Those mini-portfolios receiving an average score of 80 or more points from two thirds of the reviewers receive the award.
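
The decision rule can be made concrete with a short sketch. The following Python fragment reflects one plausible reading of the rule (an average score of 80 or more, with at least two-thirds of reviewers rating the mini-portfolio at or above the cutoff); the function name and scores are illustrative and are not code used by the program.

    # One plausible reading of the award rule described above; names
    # and scores are illustrative, not part of BCM's program.
    CUTOFF = 80  # award threshold on the 100-point overall scale

    def receives_award(scores):
        """True if the average is at or above the cutoff and at least
        two-thirds of reviewers individually rated at or above it."""
        average = sum(scores) / len(scores)
        share_at_or_above = sum(s >= CUTOFF for s in scores) / len(scores)
        return average >= CUTOFF and share_at_or_above >= 2 / 3

    # Hypothetical overall scores from a 14-member panel:
    panel_scores = [88, 91, 85, 79, 90, 87, 82, 93, 86, 84, 81, 89, 92, 80]
    print(receives_award(panel_scores))  # True: mean ~86.2; 13 of 14 at/above 80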

Reviewers base their ratings on a criterion-based standard, not on comparison with other mini-portfolios. The standard is defined by three or four detailed prototypes carefully prepared and validated for each award category through an iterative consensus-building process within a faculty task force. The prototypes are based on real faculty and illustrate the types of accomplishments faculty can include in their mini-portfolios for a given category. Published in advance on the Web,9 the prototypes illustrate the structured format and communicate the level of accomplishment needed to achieve the award.

After studying information about the award published on the Internet,9 reviewers attend a one-hour orientation meeting where they receive detailed instructions and complete a practice review. They are asked to assume that the prototypes would receive scores between 85 and 95. Thus, the cutoff of 80 points allows for a small margin to reduce the possibility of “false negatives.”


Program Implementation for Educational Leadership Category

Review of the educational leadership category occurred in the fall of 2001, approximately seven months after review of the teaching and evaluation category, the first category to be reviewed when the award program was implemented. Although the structure of the educational leadership mini-portfolio was specifically tailored to maximize faculty presentation of their accomplishments in this category,9 the process of preparing and reviewing mini-portfolios was the same for the two categories.

The structured summary for the educational leadership mini-portfolio begins with highlights from the personal statement, which are designed to draw reviewers' attention to the applicant's goals, preparation, reflective critique, and commitment to sharing "lessons learned" about effective educational leadership. This is followed by a matrix in which rows contain distinct leadership activities and columns present information about the quantity and quality of those activities. Applicants are encouraged to (1) organize their descriptions of the quantity of their leadership activities according to the formal positions they have held, (2) identify their role in each activity, (3) describe briefly the methods they have used to achieve intended outcomes, (4) summarize evidence of the quality of those methods, and (5) include further details and supporting documentation in appendices. Evidence of quality could include letters from peers commenting on the perceived quality of leadership and its impact, course evaluations, and outcomes achieved (e.g., learners' course performances). The structured summary ends with a brief explanation of how the presented accomplishments satisfy the criterion of breadth (e.g., variety in types of leadership positions, specific accomplishments, and/or audiences served).
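
To make the matrix format concrete, a single hypothetical row might be represented as in the sketch below; the field names and content are ours for illustration and do not reproduce the official template.9

    # Hypothetical row of the structured-summary matrix; fields and
    # content are illustrative only, not BCM's official template.
    leadership_activity = {
        "position": "Course director, second-year pathophysiology",
        "role": "Led redesign of the small-group curriculum",
        "methods": "Needs assessment; case-based sessions; tutor training",
        "evidence_of_quality": "Peer letters; course evaluations; exam outcomes",
        "appendix": "B",  # pointer to supporting documentation
    }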

Seventeen faculty from nine departments (13 MDs and four non-MDs) submitted educational leadership mini-portfolios. The review panel consisted of 14 individuals from 11 different departments within BCM and from two nearby institutions. Four were basic scientists, three were educational specialists, and seven were physicians. Eight of the 14 panelists had also reviewed teaching and evaluation mini-portfolios. The three-hour review panel meeting occurred 13 days after the panelists received the mini-portfolios for review.


Outcome Measures

The outcome measures of interest are: (1) the similarity of panelists' ratings for individual mini-portfolios, assessed by the pattern of standard deviations of numeric ratings across panelists for individual mini-portfolios and by the agreement of those ratings relative to the award cutoff; and (2) panelists' perceptions of the award program and review process, assessed by surveys delivered via e-mail four weeks after the selection meeting. The surveys asked reviewers to estimate how much time they had spent completing their reviews and to answer a series of open-ended questions. After several friendly reminders via e-mail and phone, 100% of the panelists responded. Three authors (BFR, NSS, and BJM) independently reviewed the questionnaire data and then collaboratively organized responses according to the questions of interest in this paper.
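
The following Python sketch illustrates one way outcome measure (1) could be computed for a single mini-portfolio. It is an illustration under our assumptions, not the analysis code actually used, and the ratings shown are hypothetical.

    # Sketch of outcome measure (1): spread of panelists' ratings for one
    # mini-portfolio and agreement relative to the 80-point award cutoff.
    import statistics

    CUTOFF = 80

    def rating_consistency(ratings):
        """Return (mean, standard deviation, fraction of panelists on the
        same side of the cutoff as the portfolio's average rating)."""
        mean = statistics.mean(ratings)
        sd = statistics.stdev(ratings)
        above = mean >= CUTOFF
        agreement = sum((r >= CUTOFF) == above for r in ratings) / len(ratings)
        return mean, sd, agreement

    # Hypothetical ratings from a 14-member panel:
    mean, sd, agreement = rating_consistency(
        [84, 88, 90, 86, 85, 91, 87, 83, 89, 88, 86, 90, 85, 87])
    print(f"mean={mean:.1f}, SD={sd:.1f}, agreement={agreement:.0%}")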


Results

We summarize outcome data and the 13 participating reviewers' survey responses according to the five questions of interest in this paper. (One of the original 14 reviewers was unable to attend the selection meeting due to unexpected patient care conflicts and was not included in the survey.)

  1. How long do reviews take to complete? On average, reviewers reported spending 1.38 hours completing their primary reviews, 1.14 hours completing their secondary reviews, and 0.19 hours on each of their remaining general reviews. (Standard deviations for these average time estimates were 0.51, 0.63, and 0.19, respectively.)
  2. Do reviewers make consistent ratings? The average of reviewers' ratings for 15 of the 17 educational leadership mini-portfolios exceeded the cutoff of 80 points. The mean (and range) of the average ratings across the 17 mini-portfolios was 88 (76 to 95); for the standard deviations, these values were 3.3 (0.7 to 7.3).
    For all but one of the 15 mini-portfolios with average ratings above the cutoff, 100% of reviewers had assigned ratings above the cutoff. For the two mini-portfolios with average ratings below the cutoff, 54% and 92% of reviewers had assigned ratings below the cutoff, respectively.
  3. Do reviewers perceive that the components of the program work as intended? In general, reviewers responded positively to questions on the survey about the mechanics or logistics of the review process itself, especially considering it was "a first pass." One reviewer wrote, "I have been impressed with the process and the commitment on the part of those responsible for defining and promoting the award."
    No reviewer questioned or challenged the use of peer review. One reviewer commented, "Overall, a very interesting and useful process. I think the size of the review panel was about right and the composition, especially with outside members, was excellent…. I was impressed at the care and thoughtfulness members brought to the considerations."
    In terms of being criterion-based, most reviewers appeared to find the process of comparing mini-portfolios with the four educational leadership prototypes (i.e., the standard) challenging but feasible. They were able to justify their ratings of the mini-portfolios by referring to the prototypes. Overall, the reviewers' comments indicate that the rating system for primary and secondary reviewers works as intended.
    Ten of the 13 reviewers found quality the most difficult criterion to evaluate (e.g., “It's pretty straightforward to assess the quantity and breadth, but more difficult to evaluate the quality”; “By the time I had settled satisfactorily on the quality mark, the other criteria were much easier to process.”). Two reviewers specifically mentioned quantity as the most difficult criterion because of the “great variety in expression.” One reviewer said that all criteria were difficult “because of variations” in form and substance of the mini-portfolios.
    In terms of evaluating quality, no reviewer specifically objected to using the established criteria of educational scholarship3 to make required ratings of educational leadership, nor did they claim that the criteria overcome all of the challenges inherent in the process. One reviewer wrote, "I thought the criteria were well-explained and the examples were very helpful. Perhaps a little more discussion about the comparison of the [prototypes] to the mini-portfolios and the appropriate assignment of points would be useful." Another commented, "The criteria and assigned point values seemed appropriate."
    Only one reviewer, who was participating on the panel for the first time, indirectly criticized the structured format of the mini-portfolio: “I found the format of the mini-portfolio to be imposed… I think it would be much more revealing to ask more open questions, review CVs and letters.” Most reviewers were either more positive or did not comment. The general sentiment was that the “recommended structure worked well to organize an often disparate set of materials for interpretation and evaluation” and that if all mini-portfolios more consistently followed the structured summary format, the “panelists would be more consistent in evaluating them.”
  4. Do reviewers perceive the overall process to be credible? When questioned about the overall review process and how it compared with other review activities in which they had participated, two panelists reported never having participated in other peer-review activities, and four reported that the process was less rigorous (e.g., "It is not as rigorous as the NIH grant review. [The] panel worked through most issues although sometimes it's not too clear as to where to set the line."). The remaining seven reviewers viewed the process as comparable to other systems they had experienced. One wrote, "I thought the review process compared very favorably to other models. Again, assessment of the quality of educational leadership may be troublesome, but I think it was done as fairly and thoroughly as possible." Another added, "In general we work similar [sic] to some grant review panels in which I have participated. The key difference is our personal knowledge of the applicants, which makes it a bit difficult to have objectivity."
  5. What changes do reviewers recommend? When asked to recommend changes to the program, one reviewer did not answer the question, seven suggested no change, and six offered specific recommendations. For example, "require clarity of quantitatively supportive evidence for the largest roles claimed when there is a collaborative effort"; "increase the number of points given to the criteria of breadth"; "expand the number of prototypes"; "require [the structured] format for the portfolio"; and provide "more time to review the portfolios before the meeting."
    Four reviewers thought it necessary to clarify whether and when it is appropriate for reviewers and/or observers to share information about applicants based on personal experience rather than relying exclusively on the information provided in the mini-portfolio. For example, one reviewer wrote of the need for "clear guidelines on whether to factor in what people said about candidates versus what was in the mini-portfolio."

Discussion

Based on our experience and the results reported above, we conclude that the Fulbright and Jaworski L. L. P. Faculty Excellence Award makes significant progress toward meeting the challenge put forth by Fincher et al.1 to establish and evaluate an infrastructure that supports "the creation, critical review, and dissemination of educational scholars' work."

While far from perfect, the process of criterion-based peer review, using standard criteria to evaluate quality and a structured format for submissions, appears to have adequate merit relative to other peer-review processes and should serve as a useful model for other institutions. The program's merit is revealed in the consistency of the reviewers' ratings for the same mini-portfolio and in the variance of their ratings across mini-portfolios. Additionally, of the 13 reviewers involved in the review of educational leadership mini-portfolios, only one indicated substantial concern about the basic format or process of the award. While the others recognized areas for improvement, they endorsed the core concepts and procedures. As one reviewer concluded, "It's a new process. We're learning."

As members of the committee charged with design and implementation of the award, we are pleased with our progress to date. We anticipated many of the concerns the reviewers expressed and are gratified that they were not more substantive. We believe that we can substantially resolve them in future review cycles. In these future cycles, we plan to replicate our study of outcome measures reported in this paper for the other three categories. We will study sources and impact of bias on reviewers' ratings, such as the match between reviewers' and applicants' departments. We also plan to study perceptions of applicants about the review process and how those perceptions are influenced by the applicants' success in receiving awards.

In conclusion, our resolve to make this program a success has been reinforced by our experience to date. We are increasingly confident that the following prediction made by the chair of our promotions and tenure committee will be realized: “Given the central role of peer review in the process of selecting recipients of the Fulbright and Jaworski L. L. P. Faculty Excellence Award, receipt of the award will inform the promotions process in a positive manner. It will provide a third-party, ‘disinterested’ evaluation of educational skills for the education portfolio similar to what peer review by NIH and merit review study sections provide in the area of research.”


References

1. Fincher RME, Simpson DE, Mennin SP, et al. Scholarship in teaching: an imperative for the 21st century. Acad Med. 2000;75:887–94.
2. Boyer EL. Scholarship Reconsidered: Priorities of the Professoriate. The Carnegie Foundation for the Advancement of Teaching. Princeton, NJ: Princeton University Press, 1990.
3. Glassick CE, Huber MT, Maeroff GI. Scholarship Assessed. San Francisco, CA: Jossey-Bass, 1997.
4. Carusetta E. Evaluating teaching through teaching awards. New Directions for Teaching and Learning. 2001;88:31–40.
5. Kreber C. Designing teaching portfolios based on a formal model of the scholarship of teaching. In: Lieberman D, Wehlburg (eds). To Improve the Academy. The Professional and Organizational Development Network in Higher Education. Vol. 19. Boston, MA: Anker, 2001:285–305.
6. Knapper C, Wright WA. Using portfolios to document good teaching: premises, purposes, practices. New Directions for Teaching and Learning. 2001;88:19–29.
7. Regan-Smith MG. Teaching portfolios: documenting teaching. J Cancer Educ. 1998;13:191–3.
8. Rubeck RF, Bitterling AC, Witzke DB, O'Connor WN. Medical faculty master teacher awards. Presentation at Innovations in Medical Education. San Francisco, CA: AAMC Group on Educational Affairs, June 8–11, 1996.
9. Committee for Educator Development, Baylor College of Medicine, 〈http://www.bcm.tmc.edu/fac-ed/awards/distinguished/〉. Accessed June 28, 2002.
© 2002 by the Association of American Medical Colleges