Amid concomitant social,1 economic,2 and political forces,3 academic medicine is confronted with a variety of challenges dealing with the education, assessment, and credentialing of physicians. For example, both the Institute of Medicine (IOM)4 and the Association of American Medical Colleges (AAMC)5 have called for significant reforms in the structure and function of U.S. medical education. In undergraduate education, requirements by the Liaison Committee on Medical Education (LCME) that accredited programs must explicitly define and assess clinical training objectives6 have spawned a reexamination of clerkship training, tracking, and assessment. Implementation of the USMLE Step 2 clinical skills exam7 has further raised the stakes by mandating students’ proficiency in basic clinical skills. At the graduate level, restrictions by the Accreditation Council for Graduate Medical Education on residents’ duty hours—combined with the requirement that educational objectives be directly linked to specified clinical competencies8—have placed an added responsibility on residency training programs.
As the growth of managed care and other social forces continue to erode physicians’ career satisfaction9 as well as patients’ trust in doctors and the health care system,10 pressures to safeguard the public health and preserve professional prestige have drawn attention to the medical education and training process. Consequently, although the above-stated reforms affect environments already laden with rules and regulations, their intent is, simply stated, to formally ensure that each medical school and residency training program is accountable for fulfilling its educational mission to train competent, qualified doctors.
Historically, medical education programs—including the program at the University of Kentucky College of Medicine (UKCOM)11,12—have responded to calls for reform with curricular innovations.13,14 However, many of these attempts have been incomplete, untimely, or incongruent with the interests of key stakeholders (see, for example, comments by Robins et al15). Further, programs that have managed substantive curricular change often have done so without an accompanying shift in educational philosophy or administrative governance—eventually regressing back to established practices for lack of a sustained educational system that could respond to continual change.16,17 It is toward this end—replacing the compartmentalization of courses with a linear, sequential educational structure—that the LCME has championed the need for centralized curricular oversight.
Ironically, because much educational innovation continues to be conceived at the course (rather than institutional) level, curtailing compartmentalization to nurture curricular continuity may have one unintended consequence: an added layer of bureaucracy18 that further impedes creative faculty members from improving, refining, or even revolutionizing the medical education and training process.19 The emergent challenge, then, is to ensure the structure, continuity, and quality of the educational product (typically an administrative responsibility) in a manner that accommodates refinement, restructuring, and innovation of the educational process (a largely faculty-initiated endeavor). That is, to establish centralized oversight without sacrificing the academic freedom of individual medical educators to think outside the box or, in Dr. Clayton Christensen’s words, initiate “disruptive innovations.”20
In this era of widespread reform, these complex and seemingly paradoxical tensions require creative approaches to curricular oversight. We wrote this article to (1) review the historical context and philosophical underpinnings of curricular governance, (2) propose a new paradigm informed by observations from industry, and (3) describe the administrative structure developed at the UKCOM. This structure, to use the language of the LCME (standard ED-33), seeks an “integrated institutional responsibility for the overall design, management, and evaluation of a coherent and coordinated curriculum” yet preserves faculty autonomy and creates an infrastructure that is continuously responsive to incremental change. This curricular governance structure, informed by production models that stress interdependence and quality assurance (QA),21 nurtures a synergistic relationship whereby institutional governance (administration) and educational innovators (faculty) not only peacefully coexist but also work together under a shared vision for how best to train physicians.22
Much of the current climate of increasing standardization, credentialing, and bureaucratization of medical education in the United States can be traced to post–World War II investments by the federal government in science research and education.23 Before the watershed Flexner report of the early 20th century, medical education was informal, unregulated, and of unknown quality. Structured more as a vocation or trade, preprofessional medical education relied heavily on the mentor–apprentice relationship, whereby physicians-in-training learned their craft by observing and working with practicing artisans.24,25 Within this crude model, governance, oversight, integration, and administration were vested entirely in a single individual (the mentor), and any results, assessments, or outcomes focused on a single target (the apprentice). No significant attention to management structure was necessary, even as a glut of proprietary medical programs around the turn of the 20th century led to fierce competition to bolster enrollments.26
With the advent of the academic health center, the educational mission became exponentially more complex.23 Responsibility for medical education was shared among multiple individual faculty members who owed their primary allegiance to autonomous academic departments—all of which supported institutional missions other than teaching. Today, although the expectation that individual faculty should excel in fulfilling institutions’ tripartite missions of research, service, and teaching27 is perhaps becoming less realistic,28 a remnant of the traditional, segmented model of curricular governance remains—namely, the continued ownership of educational turf by individual departments. Added to this is the continued department-based allocation of rewards and resources, which places faculty members in a conflict-of-loyalty dilemma between institution and department.18
In present-day undergraduate medical curricula, the educational process theoretically builds incrementally across a four-year training period, with medical training arranged sequentially to promote linkages across disciplines, translation from the basic to the clinical sciences, and, less commonly, translation from the clinical back to the basic sciences. Efforts to creatively restructure undergraduate curricula in ways that integrate instruction both horizontally (i.e., topically, among courses and disciplines) and vertically (i.e., temporally, across all four years of basic and clinical science) have shown promise; however, most programs remain closely wedded to the structure and timing of required licensure examinations. In this way, the compartmentalized and sequential process of education is analogous to the manufacture of a complex product—that is, one that requires the coordinated efforts of multiple faculty and staff, each dependent on the prior workmanship of peers.
Just as increasing complexities in health care delivery have placed a renewed emphasis on system efficiency, forces influencing medical education have similarly spawned renewed attention to quality. Although little disagreement exists that programs should strive for quality curricula, operationally defining educational quality is neither intuitive nor straightforward.29 A curriculum needs to be dynamic: able to respond effectively to evaluation and feedback, the changing expectations of stakeholders, changes in resources, and advances in knowledge, pedagogy, and assessment.30 Yet, without constant attention, curricular quality tends to diminish or drift over time.15 Given recent trends that more broadly reconceptualize quality as a continual process rather than a discrete, episodic outcome, quality may be envisioned as a property of anything that undergoes continuous improvement.19 It is toward this end that recent efforts to implement more dynamic, responsive curricular management structures have been directed, but in a manner that balances the need for creative educational innovation with mandatory centralized oversight.
Like medical training, the earliest production systems involved craftsmen who progressed successively through the ranks of apprentice, journeyman, and, eventually, master craftsman. During that era, no real management structure existed apart from the actual work process. Automobile pioneer Henry Ford altered forever the course of production systems with the development of the assembly-line method of mass production. By breaking down the process of building a car into small, constituent parts, Ford staffed the assembly line with specialists for each discrete activity. Indeed, Ford’s knowledge of standardized mass production laid the groundwork for subsequent approaches—including many of the quality guidelines adopted and refined by Japanese manufacturers (e.g., kaizen, or continuous improvement) in an effort to mass produce goods “cheaply but well.”31
This separation of the manufacturing process into components necessitated development of a coordinated management structure. The structure that evolved was multilayered and vested power, authority, and decision making in upper administration—not assembly line workers. However, by distancing management of the manufacturing process so far from the actual work, a major problem arose. Because workers lacked the authority to temporarily suspend the manufacturing process, problems encountered on the factory floor had to be relayed up the administrative chain of command—a lengthy process that delayed action and allowed defective products to continue down the assembly line. In reality, the timely reporting of assembly line problems was neither encouraged nor rewarded,32 and a culture of blame often distorted the nature or cause of the problem.
Only later in the 20th century did quality become a serious preoccupation of businesses engaged in manufacturing. William Edwards Deming (1900–1993), whose insights into management structures remain widely cited today, is often credited with being the first ambassador of quality improvement. Through the combined use of careful measurement, sampling, statistics, and knowledge of human behavior, Deming formulated his now-famous 14 points of management, which included emphases on leadership, education, communication, and a radical restructuring of supervisory roles and functions.33 Deming thought that the manufacture of low-quality products was traceable to inefficient, poorly designed production systems, not to individual workers themselves.34 His perspective aligns nicely with the contention of Stewart35 that complex organizations like medical schools can be composed of collections of brilliant people without being examples of collective brilliance (cited in Coyle36).
Many of the administrative structures now managing complex assembly processes evolved from two pure extremes. Independent structures are those in which individual units have a high degree of autonomy with relatively little management oversight. Often referred to as horizontal, such structures offer the advantage of significant local control and considerable individual buy-in and ownership. In these models, individual innovation and creativity are often fostered; however, integration and continuity across the entire process are often lost. Horizontal management most closely reflects the traditional structure of higher education and, more specifically, of academic medical centers.
Conversely, dependent structures afford individuals relatively little decision-making authority and are highly regulated and managed, usually at some distance from the actual production site. With centralized control, these vertical models are able to ensure a significant degree of continuity and uniformity within the constituent parts—a key advantage where quality control is crucial. A dysfunctional side effect, however, is the tendency for such structures to accumulate layers of bureaucracy. As the bureaucratic infrastructure expands, oversight becomes more distant and further removed from the production process, and maintaining a collective focus on the product becomes increasingly problematic. Perhaps more important, dependent structures can detract from any sense of ownership or pride in work at the functional levels, stifling individual innovation and creativity. This management model is typical of many government agencies and larger, established corporations.
In post–World War II Japan, and later in the United States, a hybrid model of management began to emerge within the manufacturing sector. Combining elements of both independent and dependent structures, this interdependent model revolutionized management oversight by strategically disassociating the decision-making function from the organizational accountability function. Whereas previous models vested accountability and decision-making authority either locally (horizontally) or centrally (vertically), this newer interdependent model separates these fundamental responsibilities by distributing decision-making authority horizontally (i.e., locally, to those who do the work) while structuring accountability vertically (i.e., centrally, to management). In this manner, continuity and oversight are maintained centrally while decision-making responsibility occurs locally—fostering a sense of ownership, innovation, and creativity.
The Lean model
Although the interdependent model seems straightforward, actual implementation is difficult. One of the earliest and most successful interdependent management structures is Toyota Motor Corporation’s Lean production model.
In a Lean model, problems are not viewed punitively but, rather, as learning opportunities that fuel continuous improvement of the overall product by implementing discrete, measurable modifications at various stages of the production process. The customer defines quality, workers are the most valued organizational resource, and the role of management is to help workers do their jobs better. In continually fine-tuning a Lean system, the emphasis is on the impact of overburden on the individual, not on the waste often attributed to worker inefficiency.37
Lean models organize workers into teams, each of which is given local responsibility for a particular phase of the production process. Guided by team leaders, team members deal with problems and issues as they arise within their authorized domains. Each team is authorized to make improvements within its own processes without approval from upper-level management; such improvements, in turn, activate evaluation mechanisms. In this manner, local individuals make local changes—the results of which are carefully measured and shared with other teams.
Traditional manufacturing systems require management to police workers—a role many supervisors reluctantly embrace; within Lean systems, however, team members are largely self-managing. Such participation and empowerment inherently build internal quality control by allowing individuals to develop a sense of process ownership. Team leaders serve as the primary interface between the internal production and management structures, thereby ensuring a smooth production process.21
Lean also emphasizes the internal customer model whereby individual employees view their downstream colleagues (those who will work on the next stage of assembly) as valued customers. By giving individual team members increased responsibility over the process, the work environment also becomes a learning environment—empowering workers to actively solve challenging problems in an ongoing, real-time fashion. This diffusion of responsibility and autonomy ultimately results in a net reduction in management effort, effectively flattening the traditional management pyramid.
Applications in medical education
As suggested earlier, continual monitoring and refinement of the educational process are crucial to ensure that increasingly crowded curricula are as efficient as possible. Centralized institutional oversight and management of the curriculum, as mandated by the LCME, is a potentially viable means of achieving this, offering improvements in (1) systematic quality control, (2) oversight of content, (3) the ability to initiate reforms, (4) consistency of evaluation, and (5) systems that establish ongoing student and faculty input, responsibilities, and budgeting.3 However, to reiterate, although a system of central curricular governance provides a reasonable and timely vehicle for informed change, it must be implemented in such a way that academic autonomy is not sacrificed. (For a more thorough discussion of centralized versus departmental approaches to curricular governance, see Reynolds et al.18)
Interestingly, whereas LCME requirements for centralized oversight to ensure curricular continuity and integration (e.g., the requirement quoted earlier regarding the administrative structure at the UKCOM) may seem to warrant a vertical, dependent approach, similar calls for new approaches and reform from the AAMC and IOM seem to be better served by a more independent, horizontal system. How, then, to devise a Lean curricular governance structure that ensures both quality control via institutional accountability and educational innovation via localized decision-making authority?
Two essential elements are needed to separate institutional accountability (i.e., management) for a medical curriculum from the decision-making authority of individual course and clerkship directors (i.e., team leaders). First, there must be a robust, accurate, and comprehensive means of monitoring system performance. As with Toyota’s Lean system, suggested changes are encouraged, but the measurable impact of those changes (e.g., enhanced quality, reduced effort) must be clearly documented.
Second, the existing infrastructure must allow the unrestricted and timely flow of these data to key decision makers at both the institutional and individual levels. This must also include evaluation of implemented modifications, which may affect other segments of the manufacturing (educational) process. Thus, the challenges to an interdependent model of curricular governance center on (1) creating a management system with horizontal decision-making authority and vertical accountability, and (2) supporting this management system with performance measures that facilitate timely, data-driven decisions. In the next section of this article, we discuss the innovative curricular governance structure at our school and how it addresses those challenges.
The UKCOM Curricular Governance Structure
The foundational Lean innovation at UKCOM, begun in 2003, was to gradually transform the culture of curricular governance. We implemented a comprehensive QA plan that transfers decision making from administration to course directors while allowing accountability to remain vested centrally.1 Further, the plan mandates data collection and analysis in a fashion that facilitates discussion and problem solving both horizontally and vertically within the curriculum. The resulting communication, in turn, promotes team building and the intellectual exchange of ideas pertaining to all aspects of medical education.
Overall, the UKCOM curriculum emphasizes early clinical experiences, integration of the basic and clinical sciences, teaching in ambulatory clinic settings, and primary care. UKCOM’s curriculum uses many learning methods, including standardized patients, clinical training models, computer-assisted instruction, human–patient simulators, problem-based learning (PBL), small-group tutorials, and interactive lectures and laboratory exercises. The first two years of study introduce students to the technical language, principles, and methods of investigation in the primary disciplines of biomedical science. The third year provides students with broad exposure to the principal medical disciplines, and the fourth year is devoted to further required clinical experiences and student electives.
In the UKCOM structure, the dean—with input from an elected faculty council—still has ultimate responsibility for and control of the curriculum. However, operational oversight can be delegated to an appointed curriculum committee, which is held accountable by operating under a defined curriculum QA plan consisting of nine globally defined elements and a prescribed process for administering this plan (see Table 1). Within the accountability framework of the plan, decision-making authority has been delegated to the committee.
The curriculum committee then further delegates decision-making authority to individual course directors and faculty via individual course QA plans, which are maintained entirely by, and are the responsibility of, course directors and department chairs. Accountability is retained by mandating that each individual plan contain the same nine core elements as the master plan, with each element tailored to the particular course. At the end of each academic year, performance data collected by the faculty, the college, and various national agencies are collated into the UKCOM curricular QA matrix (Chart 1). Performance standards are set a priori by the curriculum committee. Courses that do not meet these standards must delineate plans for improvement or be audited to identify the root cause of the deficit. In the event of an audit, a separate committee makes recommendations for improvement; again, however, ultimate responsibility remains with the respective course director. In this matrix, accountability—vested in the dean and curriculum committee—flows left to right, and decision making and the data it generates—vested in the faculty members and course directors—flow in the opposite direction. This process is continuous and iterative, because discrepancies in the quality indicators must regularly be acknowledged and a course of corrective action proposed. However, although the structure of the matrix is specified under the auspices of the curriculum committee (accountability), the corresponding data are generated entirely by the individual course and clerkship directors (decision-making authority).
For course directors, the matrix is intended to gauge how well teaching objectives are being accomplished. This follows the Lean principle whereby management guides workers through a series of questions that allow them to standardize their own work. It also allows tacit knowledge to become explicit, facilitating collective improvement efforts toward becoming a learning organization—a concept introduced by Peter Senge in The Fifth Discipline.22
Enabling informed changes to the educational process at both the institutional and individual levels depends on the use of cutting-edge information technology to systematically collect, compile, and disseminate predefined performance data.38 Rather than being collected reactively in an ad hoc, episodic fashion, performance indicators are monitored continually to provide real-time feedback to governance entities and individual decision makers. This ensures that evaluation and assessment represent stages of an ongoing, cyclical process,39 not a discrete, cross-sectional end point to be repeatedly started and stopped (see Figure 1). With such a real-time emphasis, the curriculum can respond rapidly and intelligently to pressures for change.
To reiterate, the UKCOM QA program was developed to better delineate a process that promotes sustained, incremental, and informed curricular innovation in a manner that (1) validly reflects the structure and operation of the UKCOM curriculum, and (2) ensures members’ academic freedom by balancing centralized curricular oversight with individual creativity and initiative.
Support of this plan requires the Office of Medical Education to annually provide key stakeholders (e.g., the curriculum committee, course directors, and department chairs) with aggregate outcomes such as (1) USMLE board scores, (2) AAMC graduation questionnaire data, (3) standard course and faculty evaluation reports, (4) clinical performance exam results, and (5) residency program director evaluations. Course directors review and compile these and any other supplemental data in the aforementioned curricular quality matrix form (Chart 1), which is then submitted to the curriculum committee. On the basis of these data, each course director drafts a QA plan for the upcoming year that outlines and describes changes to be made to his or her respective course or clerkship. Both the formulation and the implementation of each QA plan reside firmly with the course or clerkship director and that individual’s designees.
During a typical meeting, the curriculum committee reviews two to three QA plans and may ask course and clerkship directors for additional information or clarification. A summary containing key findings—such as procedures/policies related to teaching or learning outcomes, innovative assessment methods, or targeted deficiencies—is retained for each report. Once all annual reports and plans have been reviewed, the curriculum committee submits to the faculty council and the dean an executive summary focusing on common themes gleaned from the annual reports of the following committees: (1) professional code, (2) admissions, (3) student progress and promotion, (4) M1–M2 course directors, and (5) M3–M4 course directors. Triangulating these data, the curriculum committee makes recommendations to the dean regarding institution-wide opportunities for faculty development, curricular innovation, and improvements in facilities and equipment. The responses of the dean and faculty council, in turn, are disseminated back to course and clerkship directors and their respective chairs.
The curriculum committee has developed a policy and procedure describing the intent, process, and format of periodic audits (or focused reviews) of courses and clerkships—including clearly detailing the circumstances necessitating such action. Courses may be audited at the discretion of the curriculum committee if set performance criteria fall below a specified level. In these earliest stages, an initial alarm is sounded when mean ratings on standardized student evaluations fall below “adequate.” Because student ratings are fraught with inherent limitations, they are used only as an initial indicator of potential course quality issues—which an audit may subsequently verify or refute. Currently, more precise measures of instructional and curricular quality (e.g., trained peer and nonpeer raters) are being established to better identify and pinpoint discrete areas of underperformance (or error) within the system. Whatever the outcome, the goal of the audit is constructive rather than punitive, and course directors are involved in all stages and aspects of the review.
In summary, the college-wide QA program features continuous, interdependent communication loops to and from the dean, faculty council, office of medical education, curriculum committee, directors from all required courses/clerkships, and department chairs (where appropriate). Mirroring the present UKCOM curricular structure and operation, the basic framework embeds individual, locally owned course-quality plans within a larger, centrally governed quality plan.
Examining how the education and training of physicians at the medical school level can be informed by complex manufacturing models, we have described a curricular QA system that balances the need to fulfill LCME requirements for centralized oversight with faculty members’ academic freedom to innovate. Guided by a Lean production systems approach, the curricular QA system being implemented at the UKCOM represents a philosophical departure from excessively vertical or horizontal systems of administrative oversight. That is, it entails a hybrid system of curricular governance that borrows the respective strengths of each approach—ensuring a means of centralized oversight while encouraging (indeed, requiring) the academic freedom of individual faculty to constantly make adjustments to improve the educational process.
In looking to manufacturing for guidance in curricular oversight, we are keenly sensitive to the reaction by some that the sterile simplicity of mass-produced consumables is—at a very visceral level—grossly incongruent with the humanistic education of medical professionals. Indeed, in his efforts to professionalize U.S. medical training at the dawn of the 20th century, famed physician–educator William Osler once chastised proprietary schools for their “unrestricted manufacture of doctors” (emphasis added).41
Yet, as Evans and Fargason41 have observed, “the goals of improving a product and improving a person are not necessarily mutually exclusive.” So, although “medical students are not customers”42—and training a physician is not literally akin to manufacturing a household appliance—the basic model, as a sequential process, shares key structural similarities in that each stage relies on the successful completion of previous tasks, and success in both depends on learning.43 Using this analogy, a significant body of management literature can be brought to bear in envisioning curricular governance structures that provide effective oversight without stifling the creative energy and job satisfaction of the individual faculty driving the educational process.
Drawing on this, the UKCOM curricular QA program has been designed to (1) establish a continuous (rather than episodic) monitoring of the curriculum, (2) assign responsibility of monitoring teaching methods and content to the course/clerkship directors, and (3) provide college-level evaluation data to course/clerkship directors in a timely, continual fashion. In addition, the QA model encourages (1) innovation by educators (such as course and clerkship directors) via a bottom-up management structure, (2) systems approaches to thinking, problem-solving, etc., (3) sharing empirically derived best practices in education, and (4) collaboration, research, and dissemination of innovations.
Too often, efforts at quality improvement do not occur on a continuous and regular basis but, rather, are undertaken episodically and in reaction to some event (e.g., LCME reaccreditation). Not surprisingly, then, the LCME encourages formative accreditation, whereby schools proactively apply and monitor educational standards throughout the accreditation interval.44 The curricular governance structure presented here is perhaps uniquely suited to accommodate this process, because it facilitates the efficient yet rigorous flow of quality and performance data to all stakeholders vested with the responsibility for the “overall design and management of a well-integrated curriculum” (LCME standard ED-33). This data flow, in turn, gives key entities the transparency, and thus the comfort level, needed to delegate decision-making responsibilities to faculty and course directors, laying the foundation for a truly interdependent governance structure.
Although calls for total quality approaches to medical education are not new,45 current LCME standards serve as confirmation of our decisions at UKCOM to move curricular management in this direction. For instance, LCME standard ED-1 states that medical school faculty must define the objectives of the educational program. Standard ED-1-A further asserts the necessity of establishing assessment procedures that can demonstrate student progress toward achieving “competencies that the profession and the public expect of a physician.” Accomplishing these linkages among broad institutional objectives, individual course learning objectives, and assessment processes is a daunting undertaking that, if managed effectively, can validate and reenergize teaching and undergraduate medical curricula.
A cornerstone of the Lean philosophy is the internal customer model that (1) views every individual as an important part of a larger whole, and (2) encourages transparency and awareness of all activities within the entire system. Philosophically, this is a radical departure from tradition in many academic medical centers, where department-based courses are created in isolation and only rarely viewed as related—often in times of strife. One goal of the UKCOM curricular QA program is to make all aspects of the curriculum clearly visible to all parties involved, so that individuals begin to envision each course as a vital part of the overall medical education process. Toward this end, the UKCOM curricular governance model is not yet truly Lean, although we continue to make significant reforms guided by that philosophy.
Initiating a QA approach to curricular revision has started the slow process of breaking down silos, although implementation has not been without obstacles. Initial faculty resistance to annual course reports was likely caused by fears of top-down, bureaucratic interference and potential encroachment on academic freedom. Yet, once convinced that this QA initiative involved the collaborative efforts of many curricular stakeholders, course and clerkship directors softened their resistance. All these directors have since successfully completed and submitted reports to the curriculum committee. Additional challenges include the major cultural shift, as mentioned, and the need to significantly enhance the collection and flow of curricular evaluation data—a process that remains ongoing.
To complete the QA loop that feeds performance data back to faculty educators, we have expanded our center for excellence in medical education seminars to showcase best practices gleaned from course reports. In addition, monthly course-director lunches are now provided for highlighting different courses and openly sharing frustrations and solutions to common problems. Perhaps most important, we are moving away from a course evaluation system that assesses the quality of one course relative to another, to a system focused more on progress or growth within a course or clerkship.
Thus far, the three-year implementation of our curricular QA program has shown encouraging results:
1. In its three required undergraduate courses, the department of anatomy and neurobiology has developed and implemented a system in which peers and professional nonpeers evaluate faculty lectures with objective structured clinical examination–type checklists. These checklists were subsequently adapted and extended to graduate students to improve their teaching—another example of learning organization behavior.
2. Illustrating the importance of internal customer-informed data, the department of microbiology, immunology, and molecular genetics—in an effort to bridge the gap between the basic science and clinical years—has initiated a formal mechanism to assess students’ preparedness in the area of infectious disease during their clinical clerkships.
3. As examples of best practices, the department of pathology and laboratory medicine has established and refined a comprehensive test item banking system, and the department of microbiology, immunology, and molecular genetics has adopted a formal quality-control process for reviewing examination questions.
4. The department of general internal medicine has initiated studies to assess the effectiveness of teaching methods used during inpatient rounds.
5. The department of pediatrics is investigating methods to document faculty’s clinical teaching effectiveness for the purposes of promotion, professional recognition, and faculty development.
6. One course with marginal student evaluations willingly accepted an internal audit, which resulted in several significant changes in the course—a clear change in the culture of curricular governance and educational evaluation.
7. A second course with problematic evaluations was voluntarily (without prompting from the curriculum committee or dean) merged with another to produce a new patient-centered medicine course—illustrating an internally driven quality innovation.
As medical schools respond to calls for better curricular oversight, integration, and continuity, it is important to resist the temptation to create overly complex, highly centralized educational bureaucracies. For example, the UKCOM has been successful in sustaining a recognized medical education research effort,46 much of which can be attributed to a horizontal, department-based educational management structure. Yet, this system too must now evolve to better address issues of integration, continuity, and consistent oversight—but in a manner that retains the independence requisite to nurture individual creativity and innovation.
Applying Lessons from Business to Medical Education
We wrote this article because we believe that a better model of QA (the Lean model) is available, and we wished to show how, with effort and creativity, this model can be applied to the medical education enterprise. This model, originally developed by Toyota Motor Manufacturing, has been actively studied and promoted through a sister entity—the University of Kentucky Center for Manufacturing Research. The presence of local expertise has enabled us to apply this model to the governance of our undergraduate curriculum in a strategic, deliberate manner under the assumption that medical education—as an iterative process with shared responsibility among multiple skilled professionals—can be approached in a manner analogous to complex manufacturing.
Drawing from lessons in business and manufacturing to inform educational processes is not entirely novel. For example, Christensen and associates47,48 have drawn parallels between established corporate structures and health care delivery (including the academic medical environment) and their inability to innovate in the face of a stay-the-course mentality.
Further, in recommendations garnered from the Interdisciplinary Generalist Curriculum project, Wartman and colleagues49 contend that the “vitality of leadership, genuine collaboration, and responsive systems” are key components that closely parallel business and organizational models. In addition, Armstrong and associates43 have focused on the process management aspects of medical education, drawing distinct parallels with how comparable “demanding, high-tech, knowledge-intensive industries” coordinate and monitor individual system components and their relationships.
Yet, such approaches are not without caveats. That is, although medical education is analogous to manufacturing in that (1) graduates meet a certain level of competency (quality) to proceed to the next level of training, and (2) quality can be assessed via measured outcomes, this latter assumption often proves problematic. For example, as Pathman50 notes, desirable educational outcomes are sometimes falsely attributed to curricular quality—and resilient, self-correcting students are often able to overcome system deficiencies. Moreover, using selected outcomes (e.g., USMLE board scores) as valid indicators of curricular quality may also breed complacency, removing the impetus to fix a system that appears unbroken. Finally, unlike the manufacture of durable goods (for example), a focus on educational process may shift attention away from equally relevant material costs associated with entry (admissions) or postproduction maintenance (GME, CME).
Realistically, implementation of any new curricular management structure—in this case, one that incorporates new and foreign elements from manufacturing approaches—will not be without struggle. This is especially true when the revisions represent a fundamental paradigm shift in how curricula are governed, implemented, and evaluated. As Clark and colleagues51 have straightforwardly attested, many university faculty members are simply “not accustomed to basing curricular decisions on assessment data.” Moreover, placing emphasis on issues of educational quality, rigorous curricular evaluation, and continuous data flow in an age of extramural research dollars and clinical revenues strikes many as an unlikely or questionable investment. It should be realized, however, that as medical training programs—like universities—continue to evolve into businesses of higher education, there will be corresponding pressures to perform in a competitive and organizationally efficient manner. These demands will likely extend to medical curricula as well, provided education remains core to “the family business.”52 Just as myriad forces have compelled academic medical centers to rigorously measure, monitor, and improve the quality of care delivered to patients,53 changes in medical education provide an opportunity for curricula to “become more aligned with health system goals and help prepare clinicians to practice in this new environment.”54 To accomplish this, however, innovations in curricular reform must include innovations in curricular governance that provide ample opportunities for faculty participation.27
In The Machine That Changed the World, Womack et al55 describe three eras of production. In the craftsman era, generalists work independently to produce customized products; in the mass era, specialists perform individual tasks to create standardized products under centralized control; and finally, in the Lean era, generalist teams of cross-trained specialists use standard procedures under decentralized control to produce customer-specific products. It is our contention that most current medical education falls somewhere between the first two eras—but it is poised to fully assume the third. We propose that, as a profession, we borrow from lessons learned in manufacturing and production and seriously explore the merits of moving directly forward toward a better model of QA through innovative, responsive curricular governance.