
Evaluating Curricula

Empowerment Evaluation: A Collaborative Approach to Evaluating and Transforming a Medical School Curriculum

Fetterman, David M. PhD; Deitz, Jennifer MA; Gesundheit, Neil MD, MPH

Academic Medicine 85(5):813–820, May 2010. DOI: 10.1097/ACM.0b013e3181d74269

Abstract

Medical schools continually evolve their curricula to keep students abreast of advances in basic, translational, and clinical sciences. To provide feedback to educators, critical evaluation of the effectiveness of these curricular changes is necessary. This article describes a method of curriculum evaluation, called “empowerment evaluation,” that is new to medical education. It mirrors the increasingly collaborative culture of medical education and offers tools to enhance the faculty's teaching experience and students' learning environments. Empowerment evaluation provides a method for gathering, analyzing, and sharing data about a program and its outcomes and encourages faculty, students, and support personnel to actively participate in system changes. It assumes that the more closely stakeholders are involved in reflecting on evaluation findings, the more likely they are to take ownership of the results and to guide curricular decision making and reform. The steps of empowerment evaluation include collecting evaluation data, designating a “critical friend” to communicate areas of potential improvement, establishing a culture of evidence, encouraging a cycle of reflection and action, cultivating a community of learners, and developing reflective educational practitioners. This article illustrates how stakeholders used the principles of empowerment evaluation to facilitate yearly cycles of improvement at the Stanford University School of Medicine, which implemented a major curriculum reform in 2003–2004. The use of empowerment evaluation concepts and tools fostered greater institutional self-reflection, led to an evidence-based model of decision making, and expanded opportunities for students, faculty, and support staff to work collaboratively to improve and refine the medical school's curriculum.

Changing a medical school's curriculum is commonly an arduous process because faculty, students, and administrators may hold divergent opinions about both the goals for medical education and the best ways to achieve those goals.1,2 In addition, curriculum reformers may face stiff resistance if they challenge deeply entrenched interests, traditions, and institutional culture.3,4 Proposals for change may engender debate that can be rancorous and adversarial among competing faculty, such as basic scientists and clinicians.5 Yet, reaching a philosophical and practical consensus for change is critical, because a unified vision is needed to achieve broad acceptance, coordinated implementation, and subsequent refinement of a new curriculum.6 In this article, we (the authors, who formed the evaluation team) address two questions: (1) What are the best methods for achieving consensus about the optimal content of a medical curriculum? and (2) How can a curriculum best be evaluated to ensure that it undergoes sustainable cycles of review and improvement?

During the past 20 years, stimulated by the growth of basic science knowledge and by innovations in medical education, most U.S. medical schools have implemented significant changes in their undergraduate curricula.7 In a review of schools that had undergone successful curricular change, Bland and colleagues8 identified six predominant features: leadership, a cooperative climate, participation by organization members, favorable politics, human resource development, and effective evaluation. Similarly, Loeser and colleagues6 described five phases of curriculum change that were used at the University of California, San Francisco; the last phase was the implementation and evaluation of the new curriculum. Thus, effective evaluation is recognized as a key component of curricular reform, in part because most curricula are far from perfect when first implemented. Indeed, those who devise a new curriculum depend on and anticipate cycles of critique and revision to approximate the ideal.9–12

The Stanford University School of Medicine underwent a major change in its undergraduate medical education when a new curriculum was introduced in academic year 2003–2004. The curriculum continued to be refined yearly from 2004 through 2008. We found the principles of “empowerment evaluation”13–15—an innovative evaluation method that involves stakeholders in an egalitarian process of review, critique, and improvement—to be useful in evaluating the curriculum and in guiding change. The empowerment evaluation method includes a process for ongoing scrutiny of the effectiveness of the curriculum along with the promotion of a cooperative atmosphere. In this paper, we describe the key characteristics of empowerment evaluation and show how using its tools has helped faculty, students, administrators, and evaluators (as a group, referred to here as “stakeholders”) to revise, refine, and improve our medical school curriculum.

Empowerment Evaluation: The Approach and Its Tools

The approach

Empowerment evaluation, broadly defined, is an approach to gathering, analyzing, and using data about a program and its outcomes that actively involves key stakeholders in all aspects of the evaluation process. The approach provides a strategy and a structure through which stakeholders can empower themselves to engage in system changes. It rests on the assumption that the more closely stakeholders are engaged in interpreting, discussing, and reflecting on evaluation findings in a collaborative and collegial setting, the more likely they are to take ownership of the results and to use evaluation to guide curricular decision making and reform.

The tools

Five tools are integral to the empowerment evaluation approach.15 We discuss each one below.

Developing a culture of evidence.

Emphasis is placed on rapidly disseminating formative feedback and data about the curriculum to help inform decision making. Reporting processes are established that will improve stakeholder access to evidence (evaluation data and findings) and that will provide regular opportunities for reviewing and discussing the evidence. Faculty are encouraged and supported in their efforts to use evaluation evidence as a tool to support or refute their positions in debates about curriculum. By making this practice a cultural norm within the institution, a culture of evidence is established.

Using a critical friend.

The empowerment evaluation method uses a professionally trained evaluator to help facilitate evaluation practices at an institution. However, the role the evaluator plays is not that of an outsider or an expert removed from curriculum development and implementation but, rather, that of a trusted colleague—a critical friend—who can coach stakeholders as they engage in self-evaluation, strategic planning, and curriculum implementation. This critical friend can help set the tone for discussions about evaluation results and findings, by modeling and informally articulating the philosophy behind the empowerment evaluation approach.16–20 Specifically, the critical friend should help to establish a positive learning climate in which the views of all stakeholders are respected, input from all parties is solicited, and the conversation is guided so as to encourage comments that are constructive and improvement oriented. Critical friends model a communication style that is inviting, unassuming, nonjudgmental, and supportive, so that stakeholders will feel comfortable in speaking openly about curricular issues and concerns. Critical friends should also help to remind members of the group about what they have in common, including shared curricular goals and institutional values.

Encouraging a cycle of reflection and action.

Structured opportunities allow stakeholders to reflect on evaluation data, set priorities, and develop action plans for revising the courses and curriculum. Once revisions are adopted and implemented, those too are monitored and evaluated. The cycle of inquiry and evaluation is thus ongoing and continuous.

Cultivating a community of learners.

Under the empowerment evaluation model, the assumption is that each stakeholder, regardless of his or her status within the institution, has a unique and important perspective on curricular issues that should be shared and valued. As stakeholders engage in dialogue and discussion, they learn from one another and, in the process, are able to broaden and deepen their understanding of issues affecting individual courses and, more broadly, the curriculum. They also gain greater insight into the steps involved or hurdles encountered in making improvements and revisions. Emphasizing learning and discovery in a collaborative and egalitarian environment helps to make discussions about evaluation findings more empowering and less threatening. Reaching a consensus cooperatively also helps to reduce the likelihood that mandates for reform will be imposed on faculty or students in a top-down or hierarchical fashion.

Developing reflective practitioners.

Building in structured time, assignments, and formal expectations for all stakeholders helps to immerse them in the evaluation process. Faculty, students, and administrators develop the habit of continually reflecting on their practices and programs, both individually and as part of a larger group or community. By establishing a habit of regular self-assessment, stakeholders become reflective practitioners.

These tools of empowerment evaluation are mutually reinforcing, and they frequently operate together. At first implementation of this approach, however, it may be useful for those involved to think of the progression as taking place in stages. The stages build on each other, generating momentum that culminates in the development of a more refined and constructive evaluative eye (Figure 1).

Figure 1:
A diagram highlighting the mutually reinforcing stages of empowerment evaluation at Stanford University School of Medicine (2004–2008), which culminated in a more refined evaluative perspective. The process of empowerment evaluation begins with collecting evidence, relies on a critical friend to facilitate the discussion about the data, and develops a culture of evidence in the process of encouraging stakeholders to use evidence to justify their positions. This reflective process is cyclical, as faculty and students refine their programs and review the results of their efforts, thus establishing cycles of reflection and action. A community of learners is a product of the process of engaging with the evidence on a regular basis. The process of empowerment evaluation and, specifically, the engagement in cycles of reflection and action contribute to a habit of mind in which individuals reflect on their own practices on a routine basis, becoming reflective practitioners.

How Empowerment Evaluation Comes Together

The theoretical framework and the tools integral to the empowerment evaluation approach, as described, provide the basis for establishing and routinizing a system of evaluation that can be applied across the medical school curriculum. At Stanford, the stakeholders initially focused on developing this evaluation system at the preclinical level.

First, data were assembled from midquarter focus groups, classroom observations, and end-of-quarter student questionnaires. Second, faculty members received these data in advance of a group meeting; at the meeting, faculty members discussed the positive and negative data from the evaluations, their own reflections, and the proposed next steps. A trained evaluator, who served as a critical friend, facilitated the meeting, providing data and support but also being willing to highlight weaknesses in the course(s) and approach. This facilitated process helped to build a culture of evidence, in which it was the norm to use data to support one's position and inform decision making. Third, the faculty members engaged in an initial cycle of reflection and action by discussing ways in which they could collaborate between courses to enhance student learning—for example, by improving the logical sequencing of topics across the curriculum and by removing unintended redundancies. The evaluation findings and faculty responses served to stimulate discussion at curriculum committee meetings and in the faculty senate, which helped to cultivate a community of learners across the school. Fourth, faculty and student task forces worked together to complete the design and implementation of the revised courses and/or curriculum. Fifth, the process came full circle: The courses and curriculum were evaluated by both students and faculty to determine the effectiveness of the new educational interventions and curricular modifications. This last phase completes the cycle of reflection and action that is most characteristic of empowerment evaluation. Through this process, faculty, students, and administrators moved closer to their goal of becoming reflective practitioners who routinely assess teaching and learning in the school.

Application of Empowerment Evaluation Principles and Tools in the Evaluation of a Medical School Curriculum

The challenge of transitioning students and faculty through a major restructuring of the curriculum at Stanford lent itself to a parallel reassessment of the existing evaluation methods. The stakeholders, led by the evaluation team, began to think more explicitly about the values that were implicit in our practices and to explore ways to address key weaknesses in our approach.

Prior approach to evaluation and the areas needing change

In the past, the stakeholders and the Office of Medical Education (OME) did not explicitly articulate a set of values or goals to frame our approach to evaluation. Implicitly, we were guided by a tradition within the institution that each course was a discrete entity and that responsibility for that course lay primarily with the individual course director. Thus, dissemination of evaluation results tended to be unidirectional, with the OME collecting student ratings and comments and disseminating the results but leaving the decision making about course changes to the individual directors. The OME would intervene only under extreme circumstances, such as when a course or clerkship was floundering, as measured by consistently poor student ratings or substandard student performance on exams.

Other weaknesses of our prior approach included a lack of formal structures and processes to encourage and support stakeholder engagement in the evaluation process. Within the OME, we did not have dedicated staff members in charge of developing the evaluation program or managing and reporting evaluation data. Consequently, although the OME staff were vigilant about collecting evaluations from students at the end of each course, in some cases, data were "warehoused," and evaluation results were not made available to course or clerkship directors for months or even years afterward. In the interim, findings lost their relevance, crucial windows for making curricular or budgetary decisions were missed, and weaknesses in some courses persisted for years without being recognized. As part of our adoption of empowerment evaluation principles, we established a Division of Evaluation at the medical school to address these weaknesses and help meet accreditation standards.24

In identifying and instituting a new model of evaluation, we wanted an approach that would mirror the changes taking place in medical education nationwide.2,7,21,22 Our new medical school curriculum emphasized an integrated and applied approach to teaching and learning, and, in this new environment, faculty needed to work more collaboratively to develop the curriculum. Developing a single block of lectures in the integrated-organ-systems course on the heart or lung required input from and involvement of faculty in the departments of histology, pathology, microbiology, physiology, and pharmacology. Similarly, the stakeholders now expected more participatory involvement on the part of both teachers and learners as active learning exercises were developed in which preceptors would serve not as content experts but as facilitators, helping teams of students diagnose and develop treatment plans for patient problems.23 It was our hope that the new model of evaluation would reflect these changing institutional values and practices and, thus, would bring collaboration and participation to the forefront. The culture of evaluation would be consonant with the new culture of teaching and learning.

Current approach to evaluation

The new Division of Evaluation staff worked to implement a structured, coherent program that would apply empowerment evaluation principles and tools in evaluating the effectiveness of the curriculum. Empowerment evaluation tools served to address a wide range of issues affecting the educational program and curriculum at Stanford, including issues surrounding courses, clerkships, scholarly concentrations, advising, and faculty instruction. The approach also helped us to address crosscutting curricular issues such as scheduling, standardization of syllabi, and clinical skills training. Finally, the stakeholders used empowerment evaluation to identify broader programmatic issues, such as the need for changes in staffing and the resource allocation needs for integrated courses.

Initially, however, the primary focus was on evaluating and monitoring the preclinical curriculum, because this aspect of our program was undergoing the most radical transformation. Shifting to an applied, integrated curriculum meant changes in the faculty's roles and responsibilities. Course content and hours were adjusted, and some faculty had great concern that eliminating content in some areas in order to expand content in others might jeopardize students' knowledge base and preparedness as researchers and clinicians. We therefore felt that it was important to establish regular, structured opportunities for students and faculty to engage in an ongoing dialogue about the curriculum, to reflect on strengths and weaknesses, and to collaboratively seek solutions when problems were identified.

Applying the empowerment evaluation model to the preclinical curriculum

We applied several familiar evaluation tools to our review of the preclinical curriculum, including focus groups and surveys. However, we applied them by using the new perspective of empowerment evaluation. We conducted focus groups midway through each quarter, which allowed us to gauge student responses to the new curriculum while courses were still in progress. Key findings were immediately disseminated in brief memoranda to course directors and administrators. This process allowed numerous directors to make midcourse corrections based on the preliminary feedback. The changes ranged from simple, quick fixes, such as ensuring that there were sufficient numbers of scalpel handles for students in anatomy, to more substantive changes, such as providing students with examples of integrative questions that might better prepare them for their final examinations.

Applying the empowerment model at Stanford also required improvements in the data-collection and -reporting processes. Working collaboratively with course directors, the evaluation staff developed tailored, end-of-quarter online course evaluation forms to collect evidence about individual courses, the curriculum, and student life. Faculty received results four to six weeks after course completion. In the past, stakeholders would typically receive an extensive report of raw and unfiltered student comments, often totaling 30 pages or more; the survey questions originated almost exclusively from OME. In contrast, with empowerment evaluation, we distilled and analyzed qualitative and quantitative data, using both standard questions across the curriculum and questions rooted in specific concerns of course directors. Student ratings and comments were summarized into brief, written reports that provided a “dashboard view” of each course's key strengths and weaknesses. Particular care was taken to highlight the successful aspects of the course and to acknowledge faculty who had been singled out by students for providing excellent instruction. Comments about areas meriting attention were shared in a diplomatic and constructive manner, particularly relevant or insightful student suggestions were highlighted, and each report concluded with a brief set of recommendations. By emphasizing the positive steps that could be taken to improve a course, evaluators sought to minimize the disillusionment that can sometimes accompany poor student ratings and to provide encouragement and support for faculty as they worked to revise and refine courses, curriculum, and the quality of instruction.
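To give a concrete, if simplified, picture of the distillation step just described, the following Python sketch aggregates a handful of hypothetical survey responses into the kind of brief, per-course summary we have in mind. It is an illustration only, not the reporting system used at Stanford; the course labels, ratings, and column names are all invented.

import pandas as pd

# Illustrative data only: one row per student response; ratings use a
# 1 (poor) to 5 (excellent) scale. Courses are lettered, as in Figure 2,
# and all values here are hypothetical.
responses = pd.DataFrame({
    "course": ["Course A"] * 4 + ["Course B"] * 3,
    "rating": [5, 4, 2, 5, 4, 4, 3],
})

# "Dashboard view": response count, mean rating, and the share of
# top-two-box ("very good"/"excellent") ratings for each course.
dashboard = responses.groupby("course")["rating"].agg(
    n="count",
    mean_rating="mean",
    pct_very_good_or_excellent=lambda r: 100 * (r >= 4).mean(),
).round(1)

print(dashboard)

In practice, of course, the written reports also wove in distilled qualitative comments and targeted recommendations, which no simple numerical aggregation can replace.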

In the past, scheduled meetings with course directors were rare or sporadic. Applying the empowerment evaluation model also required instituting an ongoing practice of holding course director meetings within six weeks of the conclusion of each quarter to review and discuss evaluation findings. Meetings were facilitated by the evaluation director, who served as a critical friend, coaching faculty through the process of reviewing results and setting priorities for revisions. During these meetings, faculty directors could discuss the students' evaluations of individual courses and their educational experience during the quarter as a whole. Faculty members then shared their perspectives, bringing their knowledge and insights about the course to bear on the discussion, including additional data regarding student performance on exams (pass/fail rates) and reflections on gaps and redundancies in course content, either within individual courses or in the curriculum as a whole. These meetings provided structured opportunities for what Bohm25 called “dialogic engagement,” or exchanges among course directors, students, the curriculum committee, and the empowerment evaluators.

As access to evaluation data improved and the process of sharing evaluation data and results became less threatening, faculty became more comfortable engaging in dialogue about evaluation findings. In using the data as evidence to bolster their arguments for making changes to the curriculum, they began to build a culture of evidence. Active faculty engagement in the evaluation process made it meaningful and fostered a sense of ownership concerning the evaluation data. Through this process, we began to forge a community of learners.

Working closely with many individual course directors, students, and administrators and adopting an inclusive and pluralistic perspective toward data collection, the evaluation team was in a unique position to identify patterns across the curriculum. A variety of data sources (e.g., faculty and student surveys and comments, focus-group findings, and exam scores) were highlighted in quarterly and annual reports to the curriculum committee, with emphasis placed on highlighting crosscutting curricular issues. Examples of curriculum changes that emerged from these dialogues included adjusting the hour allotments for courses and laboratories and moving psychiatry content into the neurosciences block in order to strengthen integration. Once revisions to courses and the curriculum were implemented, the cycle came full circle as the stakeholders reviewed evaluation data the following year to assess the impact that the planned changes had had on the curriculum. Thus, a continuous and systematic cycle of reflection and action across the curriculum became institutionalized. As a result, course and clerkship directors, deans, and OME administrators now frequently make requests for reports, data, and consulting assistance from the Division of Evaluation to help inform their strategic planning.

Collaborative efforts at all levels contributed to providing a comprehensive self-assessment of the curriculum and placed stakeholders back in the driver's seat—providing them with forums and opportunities to become leading voices in setting priorities. Through continuous engagement in the evaluation process and daily immersion in evaluation discourse, stakeholders learned to mirror the process in their daily practice, thus becoming reflective practitioners.

Measuring the Effectiveness of Empowerment Evaluation

Internal metrics

Applying the empowerment evaluation model to curriculum evaluation at Stanford has contributed to improvements in course and clerkship ratings. In comparing evaluation results before and after stakeholders began using this approach, we found that the average student ratings for the required courses (18 courses) improved significantly (P = .04; Student's one-sample t test), as shown in Figure 2. Similarly, ratings for most of the required clerkships remained steady or improved. When dialogue around evaluation findings led to the development of targeted strategies for addressing key weaknesses in courses, clerkships, or the overall curriculum, student ratings showed a marked improvement. Three examples that stand out in this regard are the “Cells to Tissues” course, the obstetrics–gynecology and surgery clerkships, and our cross-curricular approach to clinical skills training.

Figure 2:
Comparison of course ratings of “excellent” or “very good” at Stanford University School of Medicine before and after the empowerment evaluation intervention, and the percentage difference between the ratings from 2006–2007 and those from 2004–2005. Overall, the average student ratings for the required courses improved significantly (P = .04, Student's t test). The courses are represented by letters to preserve anonymity.
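For readers who wish to see the arithmetic behind a comparison like the one summarized in Figure 2, the brief Python sketch below applies a one-sample t test to per-course changes in ratings. The numbers are fabricated placeholders rather than our data; only the form of the analysis (changes across 18 required courses, tested against a null hypothesis of zero mean change) mirrors what we report.

import numpy as np
from scipy import stats

# Hypothetical percentages of "very good"/"excellent" ratings for 18
# required courses, before (2004-2005) and after (2006-2007) the
# intervention. These values are placeholders, not Stanford's data.
before = np.array([62, 71, 55, 80, 67, 49, 73, 58, 64,
                   70, 52, 66, 77, 60, 69, 54, 75, 61], dtype=float)
after = np.array([70, 74, 68, 82, 71, 63, 75, 66, 70,
                  73, 61, 69, 80, 64, 74, 60, 79, 66], dtype=float)

# A one-sample t test on the paired differences (equivalent to a paired
# t test) asks whether the mean change across courses differs from zero.
change = after - before
t_stat, p_value = stats.ttest_1samp(change, popmean=0.0)

print(f"mean change = {change.mean():+.1f} points, "
      f"t = {t_stat:.2f}, P = {p_value:.4f}")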

Ratings in the Cells to Tissues course had been declining steadily until a dialogue around evaluation results brought to light two key weaknesses in course design—insufficient laboratory time and lack of integration between lectures. Stakeholders—now engaged both as a community of learners and as reflective practitioners—reviewed the findings and worked collaboratively to seek a solution. Course directors committed to work together to align lecture content, and the curriculum committee voted to expand the time allotted for laboratories. As a result, course ratings rose significantly the following year and have remained consistently high ever since.

Similarly, the obstetrics–gynecology and surgery clerkships had been receiving comparatively low student ratings until evaluators were invited to serve as critical friends. The evaluators helped to coach the clerkship directors through a strategic planning process that included reviewing evaluation findings, isolating areas of weakness, and implementing interventions. The interventions included improving orientation, clarifying goals and expectations for students, emphasizing professionalism, and developing training tutorials on providing students with feedback. As a result, student ratings of "very good/excellent" in the obstetrics–gynecology and surgery clerkships increased by 26% and 16%, respectively.

Applying empowerment evaluation principles and tools to the medical school curriculum contributed to a broadening of the scope of institutional reflection. For example, discussion of clinical skills performance scores on statewide exams and student evaluations of clinical skills instruction led to the formation of a Physical Examination Task Force that was charged with strengthening clinical skills training across the curriculum. Improvements instituted since the formation of this task force have included the development of a new high-stakes standardized patient exam, the creation of a formal remediation plan for low scorers on the exam, and the establishment of a clinical skills mentoring and training program.

External metrics

Finally, we explored the use of several external metrics to measure the effectiveness of the new Stanford curriculum as it was being refined by empowerment evaluation. The median United States Medical Licensing Examination (USMLE) Step 1 score at Stanford for the three years before the introduction of the new curriculum and evaluation method was 230, whereas the median score rose to 237 by three years after the introduction. The failure rates of students in USMLE Step 1 and USMLE Step 2 Clinical Knowledge, both before and after the curricular change, were each under 2% (national rates: 6% and 5%, respectively). For clinical skills development, no data were available from 2000–2003 because USMLE Step 2 Clinical Skills was introduced for the first time in 2004–2005. However, in academic years 2005–2006, 2006–2007, and 2007–2008, the failure rate on this exam at Stanford was 1%, whereas the national failure rate was 3%. We also examined student performance during the first postgraduate year of training. Approximately 80% of our graduating students match at competitive academic residencies. In their first year of residency, our graduates' performance generally met or exceeded expectations in skills and knowledge domains, as reported by residency program directors. Although these metrics lack the precision and certainty of a controlled experiment, the aggregate measures indicate that student education was kept at a high level or was enhanced by the introduction of the new curriculum and the simultaneous use of empowerment evaluation to refine it.

Challenges and Barriers

Adopting an empowerment evaluation approach at a medical school is rewarding but also challenging. Faculty members often have demanding schedules and little frame of reference for participating in collaborative feedback. Time devoted to reviewing the curriculum and modifying courses may compete with time devoted to research and publication—activities that commonly bring greater institutional rewards. Some faculty, accustomed to reviewing their evaluation results in private, initially had concerns about discussing course and curriculum strengths and weaknesses in a group setting. For empowerment evaluation to work effectively, members of both the administration and the faculty needed to view their participation in teaching and a candid critique of the school's curriculum as priorities equal to research, publications, and grants.

For these reasons, securing appropriate representation and participation across faculty groups can be challenging. Course directors at Stanford were strongly encouraged to attend empowerment evaluation meetings, and they generally did so; however, faculty members with smaller roles in the curriculum needed to be more actively recruited. And some faculty members, despite every effort to accommodate meeting times to their schedules, were virtually impossible to attract.

Empowerment evaluation requires an investment of resources. Institutions must be committed to supporting the evaluation staff with formal training to facilitate the process, and faculty time must be protected to allow their participation in meetings about and in discussions of the curriculum. In the absence of an evaluation unit and a regular meeting schedule, the collaborative process can be compromised. If data are not collected, analyzed, disseminated, discussed, and reviewed in a timely manner, rigor may not be maintained.

Thomas Edison once observed, "Opportunity is missed by most people, because it is dressed in overalls and looks like work." There is no question that participatory models of evaluation do take work. Extracting oneself from the everyday demands of teaching, research, and patient care requires extra energy and commitment on the part of each stakeholder. But those stakeholders who are able to engage in a sustained effort to deliberate over data, achieve consensus about specific decisions, and act vigorously to implement change will find that the process brings with it new opportunities to enhance the educational experience of their students.

Receptivity of Stakeholders to Empowerment Evaluation

The acceptance of the empowerment evaluation approach by medical school faculties has not been formally evaluated; however, at Stanford we are encouraged by the positive feedback we have received from course and clerkship directors, who seem to appreciate that the empowerment evaluation approach seeks their active involvement and, in turn, provides them with concrete feedback in a timely manner. We provide three quotes from Stanford faculty members as evidence that empowerment evaluation has initially been well received.

A department chair said,

We are grateful [to the empowerment evaluation approach] as we continue to improve our departmental teaching programs! In addition to the superb efforts [of our clerkship directors], our entire faculty and our residents and fellows are now much more committed to and involved with medical student education.

The director of clerkship education said,

Several clerkships have responded to this [evaluation] feedback by requesting individualized coaching [from the evaluation group, which has] made several site visits to assist clerkship directors in revising their orientations, syllabi, and student performance evaluations. End-of-clerkship evaluations demonstrate improved ratings as a result of feedback and coaching; narrative performance evaluations have improved in terms of completeness and level of detail.

The director of a medical student clinical clerkship said, “All of the Stanford sites want to help improve their teaching. The differences [that empowerment evaluation] made at our neighboring hospitals were visible and clearly effective.”

Summing Up

This discussion has highlighted the principles and tools of empowerment evaluation as applied to medical education. A nationally recognized school of medicine provided both a case study and concrete examples of the implementation of this approach, to help illustrate the critical processes associated with empowerment evaluation. Applying these concepts and tools fostered greater institutional self-reflection, led to an evidence-based model of decision making, and expanded opportunities for students, faculty, and support staff to work collaboratively to improve and refine a medical school curriculum. We believe that empowerment evaluation provides an approach more in keeping with the evolving culture of medical education nationwide and that it offers valuable tools for building a better teaching experience for faculty and an enhanced learning environment for students.

Acknowledgments:

The authors thank the faculty, students, and administrators at the Stanford University School of Medicine who were involved in curriculum design for joining us in a community of learners dedicated to the improvement of medical education.

Funding/Support:

None.

Other disclosures:

None.

Ethical approval:

Not applicable.

Disclaimers:

The authors had full access to all of the data in the study and take responsibility for the integrity of the data and the accuracy of the data analysis.

References

1 Mennin SP, Krackov SK. Reflections on relevance, resistance, and reform in medical education. Acad Med. 1998;73(suppl):S60–S64.
2 Ludmerer KM. The internal challenges to medical education. Trans Am Clin Climatol Assoc. 2003;114:241–250.
3 Regan-Smith MG. “Reform without change”: Update, 1998. Acad Med. 1998;73:505–507.
4 Bloom SW. The medical school as a social organization: The sources of resistance to change. Med Educ. 1989;23:228–241.
5 Cantor JC, Cohen AB, Barker DC, Shuster AL, Reynolds RC. Medical educators' views on medical education reform. JAMA. 1991;265:1002–1006.
6 Loeser H, O'Sullivan P, Irby DM. Leadership lessons from curricular change at the University of California, San Francisco, School of Medicine. Acad Med. 2007;82:324–330.
7 Cooke M, Irby DM, Sullivan W, Ludmerer KM. American medical education 100 years after the Flexner report. N Engl J Med. 2006;355:1339–1344.
8 Bland CJ, Starnaman S, Wersal L, Moorehead-Rosenberg L, Zonia S, Henry R. Curricular change in medical schools: How to succeed. Acad Med. 2000;75:575–594.
9 Hollander H, Loeser H, Irby D. An anticipatory quality improvement process for curricular reform. Acad Med. 2002;77:930.
10 Patel VL, Yoskowitz NA, Arocha JF. Towards effective evaluation and reform in medical education: A cognitive and learning sciences perspective. Adv Health Sci Educ Theory Pract. 2009;14:791–812.
11 Malik R, Bordman R, Regehr G, Freeman R. Continuous quality improvement and community-based faculty development through an innovative site visit program at one institution. Acad Med. 2007;82:465–468.
12 Morrison J. ABC of learning and teaching in medicine: Evaluation. BMJ. 2003;326:385–387.
13 Fetterman D, Kaftarian S, Wandersman A. Empowerment Evaluation: Knowledge and Tools for Self-Assessment and Accountability. Thousand Oaks, Calif: Sage; 1995.
14 Fetterman DM. Foundations of Empowerment Evaluation. Thousand Oaks, Calif: Sage; 2001.
15 Fetterman DM, Wandersman A. Empowerment Evaluation Principles in Practice. New York, NY: Guilford Publications; 2005.
16 Argyris C, Schön D. Organizational Learning II: Theory, Method and Practice. Reading, Mass: Addison Wesley; 1996.
17 Reason P. Handbook of Action Research: Participative Inquiry and Practice. 2nd ed. London, UK: Sage Publications; 2008.
18 Rogoff B, Matusov E, White C. Models of Teaching and Learning: Participation in a Community of Learners. London, UK: Blackwell; 1998.
19 Schön D. The Reflective Practitioner. San Francisco, Calif: Jossey-Bass; 1988.
20 Senge P, Cambron-McCabe N, Lucas T, Smith B, Dutton J, Kleiner A. Schools That Learn: A Fifth Discipline Fieldbook for Educators, Parents, and Everyone Who Cares About Education. New York, NY: Doubleday; 2000.
21 Ad Hoc Committee of Deans, Association of American Medical Colleges. Educating Doctors to Provide High Quality Medical Care: A Vision for Medical Education in the United States. Available at: http://services.aamc.org/publications/showfile.cfm?file=version27.pdf. Accessed January 28, 2010.
22 Cottingham AH, Suchman AL, Litzelman DK, et al. Enhancing the informal curriculum of a medical school: A case study in organizational culture change. J Gen Intern Med. 2008;23:715–722.
23 Lynch DC, Swing SR, Horowitz SD, Holt K, Messer JV. Assessing practice-based learning and improvement. Teach Learn Med. 2004;16:85–92.
24 Van Zanten M, Norcini JJ, Boulet JR, Simon F. Overview of accreditation of undergraduate medical education programmes worldwide. Med Educ. 2008;42:930–937.
25 Bohm D. On Dialogue. London, UK: Routledge; 1996.
© 2010 Association of American Medical Colleges