Assessment for Systems Learning: A Holistic Assessment Framework to Support Decision Making Across the Medical Education Continuum

Bowe, Constance M. MD; Armstrong, Elizabeth PhD

doi: 10.1097/ACM.0000000000001321
Perspectives

Viewing health care from a systems perspective—that is, “a collection of different things which, working together, produce a result not achievable by the things alone”—raises awareness of the complex interrelationships involved in meeting society’s goals for accessible, cost-effective, high-quality health care. This perspective also emphasizes the far-reaching consequences of changes in one sector of a system on other components’ performance. Medical education promotes this holistic view of health care in its curricula and competency requirements for graduation at the undergraduate and graduate training levels. But how completely does medical education apply a systems lens to itself?

The continuum of medical training has undergone a series of changes that have moved it more closely to a systems organizational model. Competency assessment criteria have been expanded and more explicitly defined for learners at all levels of training. Outcomes data, in multiple domains, are monitored by external reviewers for program accreditation. However, translating increasing amounts of individual outcomes data into actionable intelligence for decision making poses a formidable information management challenge.

Assessment in systems is designed to impart a “big picture” of overall system performance through the synthesis, analysis, and interpretation of outcomes data to provide actionable information for continuous systems improvement, innovation, and long-term planning. A systems-based framework is presented for use across the medical education continuum to facilitate timely improvements in individual curriculum components, continuous improvement in overall program performance, and program decision making on changes required to better address society’s health care needs.

C.M. Bowe is codirector, Systems Approach to Assessment in Health Professions Education, Harvard Macy Institute, senior consultant, Partners Health Care International, and professor emeritus, Clinical Neurology, University of California, Davis, School of Medicine, Sacramento, California.

E. Armstrong is director, Harvard Macy Institute, and clinical professor, Pediatrics, Harvard Medical School, Boston, Massachusetts.

Funding/Support: None reported.

Other disclosures: None reported.

Ethical approval: Reported as not applicable.

Correspondence should be addressed to Constance M. Bowe, Harvard Macy Institute, 100 Cambridge St., 20th Floor, Boston, MA 02114; telephone: (617) 535-6483; e-mail: cbowe115@gmail.com.

The value of systems theory is well recognized as it is applied in a variety of fields ranging from engineering, economics, and business to education, social services, and health care. Health policy makers and educators have concluded that a systems perspective is essential for practice in the 21st century1–3 and have advocated for its inclusion in curricula to prepare future health care providers.4–8 Medical education has gradually introduced a succession of changes that have bridged previous boundaries between the individual components of the training continuum to better align its goals with society’s health care needs. Parallel domains of the competencies that learners are required to demonstrate for advancement through the training continuum have been unified and expanded to reflect the essential capabilities ultimately needed in practice.8,9 Accreditation bodies have progressively required more comprehensive documentation of outcomes for approval of both undergraduate and graduate programs. Collectively, these changes have strengthened a holistic view of medical education as a system more closely interfacing with the health care delivery system and society’s health care needs.

Despite these advances toward a systems perspective on medical education, advocates of outcomes-based medical education remain concerned by the resistance its acceptance continues to encounter.10 Accreditation bodies have increasingly encouraged greater internal attention and response to program outcome deficiencies at periodic intervals between external evaluations.11 Moreover, in the absence of evidence of its value,12 the cost:benefit ratio of the increasing effort expended on assessment is questioned by learners and faculty, who often perceive few tangible improvements in return. Such concerns are not limited to medical education; thought leaders in higher education more broadly have described an evolving “culture of accreditation”13 that has led to “assessment fatigue.”11

Monitoring of individual program outcomes for accreditation does not necessarily advance a program’s understanding of itself as a system. From a systems perspective, isolated data are not real information until they are considered within the context in which they are gathered.14,15 Nor do outcomes for individual system components provide actionable information until they are integrated and analyzed to infer their meaning for the system as a whole. Only in this way can intelligence on system performance be provided for evidence-based decision making. Translating growing amounts of assessment data into meaningful and actionable intelligence poses an information management challenge that can only be addressed by actively engaging diverse system members in the interpretation of their program’s data.16,17

Viewing the medical education continuum from a systems perspective suggests that reliance on individual component outcomes is insufficient to provide a holistic picture of program performance. More in-depth analyses of combinations of assessment data are needed to appreciate component interrelationships, identify factors contributing to unsatisfactory outcomes, and provide evidence to guide quality improvement efforts and long-range program planning. Below, we review systems principles relevant to medical education in general, and to assessment specifically, and apply them to develop a framework for the medical education continuum. The framework we propose in this article addresses three major goals of assessment in systems: continuous monitoring of individual curriculum component performance while a planned educational activity (EA), such as a course, curriculum module, or clinical rotation, is in progress, to facilitate timely corrections; intermittent analyses of current system performance to inform continuous quality improvement (CQI) and innovation efforts; and periodic evaluation of longitudinal system performance to determine its readiness for systemic changes needed to better prepare learners to meet the evolving needs of the health care system.

Systems Principles and Practices Relevant to Medical Education

In essence, a system is “a collection of different things which, working together, produce a result not achievable by the things alone.”18 The “different things,” or functional components, including stakeholders, are discrete individuals or groups that perform the specific tasks that yield the system’s desired product or service. Achieving that better result relies on the planned coordination of system components, which is made explicit in the system design; effective and efficient communication pathways and feedback loops, critical to the system’s function, are clearly specified. Hence, a primary systems principle is that the outcome of a system represents more than the simple sum of the outcomes of its individual parts.

Complex adaptive systems

A specialized branch of systems science focuses on systems occurring in nature and organizations that are highly dependent on human behaviors, or complex adaptive systems (CASs).19,20 These systems are much less predictable than the mechanized systems described in manufacturing. They are characterized by dynamic interactions and variable interrelationships between and among their components that reverberate throughout the system.20 Consequently, considerable variability is observed in human-dependent CASs over time in both outcomes and system performance. Alterations in any one component can have far-reaching consequences.

CAS design and management

The valuable adaptability of a CAS to rapidly respond and adjust to alterations in its environment underlies the effectiveness and efficiency reported in well-functioning systems. However, it also renders CASs particularly resistant to tight centralized control. Medical educators and managers involved in mandated, top-down curriculum reforms can well appreciate the resistance, active and passive, that can emerge. The challenge in CAS design is to provide sufficient organizational structure to keep all stakeholders on task without limiting component flexibility, initiative, and commitment to overall system performance improvement.

To achieve this delicate balance between centralized coordination and advantageous autonomy, system designs explicitly define system goals and the specifications of the desired final outcome or service to be provided, generically referred to as the “final product”; work flow to be followed to achieve the desired outcome; stakeholder roles, responsibilities, and accountability; and, most important, the “structured context” in which work is to be organized, coordinated, integrated, and supported to add value to the final product.21 The latter includes planned processes, procedures, communication pathways, and supports designed into the system to link the activities of individual components and ensure that they work together effectively and efficiently. The basic components of an undergraduate or graduate medical education program, viewed from the perspective of a system design, are presented in Table 1.

Table 1

Systems scientists acknowledge the management challenges posed by CASs. Admittedly, administration in medical education is further complicated by the positioning of our programs within institutions simultaneously serving multiple missions—patient care, research, and education. We do not underestimate the tensions arising from this situation, but our present discussion is focused on the potential benefits of systems principles for the advancement of our educational goals.

Systems thinking

A central construct of systems theory is systems thinking, the ability to recognize the influence of component complexity, dynamic interrelationships, and situational context on system outcomes.22 System performance is dependent on this skill at all levels of the organization.23,24 Valerdi and Rouse25 report that systems thinking skills vary significantly among different stakeholders in an organization, but they can be learned. Toward that end, systems science has adopted a number of strategies, derived from research in the fields of cognitive psychology, social science, organizational behavior, and adult learning theory, which are familiar to medical educators. Visual graphics, including variants of concept maps, flow diagrams, and organizational networks, are used in problem-solving discussions to illustrate functional interrelationships, critical communication pathways and feedback loops, and fundamental interfaces between and among components where system breakdowns most commonly occur.26–30

Given the value that systems place on efficiency, it may be surprising that they invest considerable time and effort in advancing systems thinking at all stakeholder levels. However, a system’s reliance on its stakeholders to make rapid and appropriate corrections when performance problems arise rests on those stakeholders recognizing a problem and using systems thinking to make the necessary adjustments. Inclusive stakeholder discussions and dialogue about how their system works, or could be improved, reveal discrepancies in members’ understanding of the system31 and are useful in identifying the critical factors contributing to system problems and dysfunction.20,26,29 Most important, time spent in these deliberations serves an educational purpose for system members and advances organizational learning.26,32 We stress that organizational learning is not an attempt to develop groupthink; quite the contrary. Rather, through the exchange and discussion of diverse perspectives and experiences, a knowledgeable workforce is created that can actively participate in decision making and system improvement.20,23,26,31,32

Similar stakeholder-inclusive discussions often occur in curriculum reform planning and the self-study process for accreditation and are associated with improved learning and curriculum efficacy. One could postulate that some of the initial benefits observed after a curriculum reform stem, in part, from the stakeholder exchanges that take place. Disparate opinions on program goals, problematic areas, and effective solutions are made explicit. Communication between and among stakeholder groups is strengthened, the interfaces between program components are reinforced, and stakeholders’ awareness of interrelationships within their system is advanced. However, unless such exchanges are maintained, these educational gains can gradually erode.

Accountability, oversight, and decision making

In systems theory, responsibility for outcomes and the quality of the final product is shared and partially dependent on compliance with planned procedures and processes as well as the system’s provision of critical supports. Hence, desired or disappointing outcomes are not attributed to any individual component alone. Systems intentionally encourage greater shared responsibility for outcomes than is typical at many academic institutions.16 Stakeholders are expected to address problems arising from noncompliance with planned processes and procedures as they occur; they then work with managers on further analyses to determine what system improvements are needed to prevent a recurrence.16,21,30 Similarly, discussions about needed CQI efforts or the introduction of innovations solicit diverse stakeholder opinions to minimize unintended consequences for other components of the system.

A system’s ability to respond to major changes in its external environment, such as new regulatory standards or end-user needs, is predicated on its members’ accurate understanding of the system’s current strengths and limitations, which allows them to determine the true benefits and costs of any major system change. Such deliberations initially include disparate stakeholder viewpoints26 and move forward not necessarily to achieve consensus but at least to identify changes that the “system members can live with.”29

Assessment in systems

Given the high value systems place on evidence-based decision making and on knowledgeable stakeholders participating in it, the overarching purpose of assessment in systems is to provide an accurate “big picture” of system performance. Actionable information is sought to optimize system operations, supporting decision making at three distinct organizational levels: internal correction of individual component performance, quality improvement in overall system performance, and system readiness to respond to changes in the external environment.

A View of Assessment in Medical Education From a Systems Perspective

Assessment of individual components’ performance

Appreciable benefits have resulted from centralizing curriculum oversight and standardizing assessment criteria in medical education over the past few decades. Collectively, these changes have prepared the medical education continuum for a more systems-based approach to assessment and program evaluation. The brief review that follows contrasts current practices in medical education with those advocated by systems thinkers3,16,20,23,33; it is intended not as a criticism of medical education per se but as a consideration of the potential benefits of further applying systems principles to its assessment design.

Medical educators emphasize the importance of frequent and constructive feedback to learners for performance improvement. This luxury is rarely extended to the teachers and directors of individual curriculum components while those components are in progress.34 Medical educators recognize that a multitude of factors affect learning and learner outcomes.19,35–38 Yet assessment data collected at the completion of an EA are typically routed to an administrative or faculty curriculum committee, where review and interpretation are eventually conducted without the additional contextual information that contributed to the results. Delays in delivering outcome results to the individuals in the best position to address concerns (learners, faculty, and curriculum directors) limit their ability to make perceptible adjustments in their approach to learning, teaching, and EA management for the benefit of all involved.34,37

In contrast, systems frequently monitor short-term and intermediate outcomes,33 as well as compliance with planned contextual structures, throughout each phase of product development or service provision.29 Personalized data are made available to all stakeholders involved in a task so that they can make rapid corrections. At the completion of each stage, outcomes are reviewed against defined standards to ensure quality and readiness for advancement to the next stage of development. Excessive deviation of outcomes from expectations, or noncompliance with procedures or processes, signals a broader system problem. These practices ensure that systemic issues are addressed and reinforce the value and relevance of assessment for all stakeholders involved.

Assessment of system performance

Quality assurance in medical education is heavily dependent on individual curriculum component outcomes and retrospective learner ratings and commentary solicited at the completion of individual EAs or stages of training. The impact of turnover in teaching faculty, inclusion of new and remote teaching sites, or curriculum innovations can go unrecognized until significant outcomes deterioration has occurred or overall system performance has been compromised. The excessive reliance on segregated EA outcomes limits consideration of other, equally important, systemic factors contributing to learning and teaching.38,39 Accreditation bodies are increasingly encouraging programs to internally initiate more frequent examinations of outcomes data, especially in areas found to be deficient on accreditation review.11,40 That approach, combined with prospective tracking of outcomes trends and contributors, could reduce the incidence of severe action decisions by accrediting bodies.

In addition to monitoring individual component outcomes, systems perform additional analyses on combinations of data to identify interrelationships contributing to outcomes, patterns suggesting synergistic and competitive interactions, trends in performance, and the root causes of identified problems.26,33 Attention is also paid to the proportional contributions of system design features, including processes and procedures,39 and the predictive value of assessment parameters followed. The meaning of the resulting evidence is deliberated by stakeholders to inform CQI, identify innovations likely to be most effective, and streamline operations.41 The conclusions of these deliberations are shared with all stakeholder groups to extend organizational learning.16,26,27

Successful educational innovations reported in the literature can prove less effective when transplanted from their original setting to other institutions; education systems can differ significantly. Plsek20 has studied organizational structures, processes, and patterns of innovation in a number of systems and concludes that multiple factors are closely related to the generation and success of innovative ideas. Coercive strategies have limited value in finding a “receptive context in the organizational culture”20; providing opportunities for “people to meet, reflect and discuss”20 is much more effective. Stakeholder exchanges on the potential benefits and losses of proposed changes help to identify the critical supports required for their success. Stakeholders’ fears about losses can also be addressed by including assessment criteria that closely monitor performance in the areas of concern.

Assessment to inform system-wide, long-range planning

Well-functioning systems are purported to be adept at coping with changes in their external environment. This is especially relevant for medical education, which is repeatedly challenged to align its programs’ goals more closely with society’s evolving and projected health care needs.41–47 Responding to such demands often necessitates major alterations in curriculum design, governance, and program priorities, raising concerns among individual stakeholder groups.

Resistance to change is not entirely eliminated in well-functioning systems, but it is reduced by systems’ conscious efforts to cultivate a critical mass of stakeholders skilled in systems thinking and informed by organizational learning. Preparation for long-range planning in systems includes assembling evidence of system performance critical to the discussion: longitudinal system performance over several years, feedback from multiple sources on end users’ satisfaction with the system’s product or service, evolving consumer expectations, and research to anticipate likely changes in regulatory requirements. Armed with this information, stakeholders discuss desirable future improvements and new capabilities in their products or services. Proposals are considered in the context of the system’s readiness to deliver them and the critical resources and supports needed to successfully implement them.

A Systems-Based Assessment Framework for Medical Education

The potential value of the proposed systems-based framework for assessment in medical education rests partly on the premise that more active engagement of all stakeholder groups (students, faculty, and curriculum directors) in interpreting the meaning of their programs’ outcomes will advance their systems thinking skills and ultimately yield more relevant and actionable information for program decision making. Individual component outcomes will continue to be collected and available for accreditation purposes but will be complemented by more comprehensive and holistic analyses of overall system performance.

The framework is designed to evaluate system performance at three distinct levels: individual component performance for immediate corrective actions, overall system performance for quality improvements, and longitudinal system performance for long-range program planning. Summaries of the assessments performed at each level are provided in Table 2.

Table 2

Level 1: Assessment of individual component performance

Frequent internal assessments of learning, teaching, and curriculum component performance are performed at Level 1 to indicate the need for corrective actions in individual EAs in progress. The results of these assessments are rapidly communicated to the stakeholders involved in the EA (learners, faculty, and EA directors) as formative feedback on how effectively core tasks (learning, teaching, and assessment) are currently being performed, allowing individuals to make adjustments. Compliance with planned processes and procedures is monitored and available to provide a context in which outcomes can be interpreted. The latter aspect of assessment at this level is especially important for EAs conducted at multiple sites, where teaching and learning experiences can vary significantly. Learner participation in these discussions is critical to recognizing compliance disparities that can result in different outcomes for individual cohorts of learners. The rapid provision of information to individuals engaged in the ongoing tasks allows them to address problems, tangibly demonstrating the benefits of assessment and reinforcing a culture that values it. Similar approaches to assessment during curriculum units in progress have been implemented and reported to be successful.34,37
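To make this Level 1 monitoring loop concrete, the sketch below shows one way in-progress EA data might be screened so that concerns reach the stakeholders involved without delay. It is a minimal illustration under stated assumptions: the data fields, thresholds, and the flag_level1_concerns helper are hypothetical and are not prescribed by the framework.

```python
# Hypothetical sketch of Level 1 monitoring for an EA in progress.
# Field names, thresholds, and data are illustrative assumptions only.
from dataclasses import dataclass
from statistics import mean


@dataclass
class CohortSnapshot:
    site: str                  # clinical site where the EA is running
    competency_scores: list    # mid-rotation competency ratings (0-100)
    feedback_documented: int   # observation sessions with documented feedback
    observations_planned: int  # observation sessions planned to date


def flag_level1_concerns(cohorts, score_standard=70, compliance_standard=0.9):
    """Return formative alerts to route back to learners, faculty, and EA directors."""
    alerts = []
    for c in cohorts:
        if mean(c.competency_scores) < score_standard:
            alerts.append(f"{c.site}: mean competency below expected standard")
        if c.feedback_documented / c.observations_planned < compliance_standard:
            alerts.append(f"{c.site}: planned feedback sessions not reliably documented")
    return alerts


if __name__ == "__main__":
    snapshots = [
        CohortSnapshot("Clinic A", [72, 68, 75, 80], feedback_documented=8, observations_planned=8),
        CohortSnapshot("Clinic B", [61, 64, 58, 66], feedback_documented=3, observations_planned=8),
    ]
    for alert in flag_level1_concerns(snapshots):
        print(alert)
```

The essential design choice at this level is the routing, not the computation: alerts go directly to those engaged in the EA rather than waiting for end-of-course committee review.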

At the conclusion of an EA, final outcomes are reviewed to determine whether learning objectives have been met and learners are ready for advancement or require remediation. Additional discussions of EA outcomes by all involved faculty, students, and EA directors consider what future improvements may be warranted and what questions remain to be addressed more thoroughly by Level 2 analyses.

The Level 1 assessment example provided in Box 1 illustrates how this approach facilitated rapid correction of noncompliance with the intended process for giving feedback to learners. Most important, stakeholder discussions identified potential systemic contributors to the concerning situation for more comprehensive exploration at Level 2.

Box 1. Level 1 Example of Medical Education Decisions Informed by Systems-Based Assessment: Rapid Correction of Individual Stakeholder and Curriculum Component Performance

Early competency performance assessments of a cohort of trainees on an ambulatory clinical rotation fell below the standards expected at that point in the rotation. Compliance monitoring indicated that constructive performance feedback was not reliably provided to learners by all supervising faculty and residents, as planned, during the scheduled observation sessions in the first two weeks of the rotation.

The EA director initiated formal faculty and resident development sessions on giving feedback. She also required supervisors and learners to document the feedback delivered and received, respectively, during the remainder of the clerkship. At the conclusion of the rotation, competency performance outcomes had improved for the majority of learners.

In EA discussions of the assessment data, students, residents, and faculty raised different concerns about some of the contexts in which the clinical performance assessments occurred and questioned the appropriateness of some settings for performance assessment and feedback. A formal request was made for Level 2 analyses to explore the association between final learner performance outcomes, the quality of documented feedback received, and the clinical setting in which observations were performed (number of patients scheduled and medical staffing for the clinic) across the various clinic sites in the rotation. Level 2 analyses subsequently identified a strong relationship between learner competency outcomes and the teaching faculty:patient ratio, as well as the teaching faculty:student ratio. These findings led to the development of new guidelines for teaching clinic staffing and patient volume.

Abbreviation: EA indicates educational activity.

Level 2: Assessment for improvement in system performance

Level 2 functions as a collaborative, integrative hub to more thoroughly evaluate system factors contributing to performance outcomes. Comprehensive analyses of combinations of assessment data are planned and interpreted to identify patterns of component interrelationships, trends in learning trajectories, root causes of unexpected outcomes, and the predictive value of various assessment approaches and criteria. The focus of these analyses is not standardized but varies depending on questions raised by system stakeholders or on concerning variations in EA outcomes. Level 2 evaluations also examine outcomes data for remote effects of individual EA innovations on other curriculum components and on system performance.

Participation at Level 2 includes all stakeholder groups to ensure that analysis planning and interpretation reflect a diversity of perspectives, including those of students. The analyses performed can provide evidence of systemic problems that warrant CQI efforts and of issues that require innovative approaches for resolution. Level 2 findings are communicated to specific stakeholders, tailored to their decision-making responsibilities. Summary conclusions of general interest to all system members are more widely circulated to advance organizational learning.
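As one concrete illustration of the kind of combined analysis described above (and exemplified in Box 1), the sketch below correlates site-level end-of-rotation competency scores with teaching faculty:patient ratios. The data, field names, and the pearson helper are hypothetical assumptions used only to show how assessment and contextual data might be examined together; they are not a prescribed Level 2 method.

```python
# Hypothetical sketch of a Level 2 analysis combining learner outcomes with
# contextual staffing data across clinic sites; all values are illustrative.
from statistics import mean


def pearson(xs, ys):
    """Pearson correlation coefficient for two equal-length sequences."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)


# Site-level data: mean final competency score and teaching faculty per patient.
sites = {
    "Clinic A": {"mean_competency": 82, "faculty_per_patient": 0.25},
    "Clinic B": {"mean_competency": 74, "faculty_per_patient": 0.15},
    "Clinic C": {"mean_competency": 68, "faculty_per_patient": 0.10},
    "Clinic D": {"mean_competency": 79, "faculty_per_patient": 0.20},
}

scores = [s["mean_competency"] for s in sites.values()]
ratios = [s["faculty_per_patient"] for s in sites.values()]

# A strong positive coefficient would prompt the stakeholder discussion of
# staffing guidelines described in Box 1, not an automatic decision.
print(f"competency vs. faculty:patient ratio, r = {pearson(scores, ratios):.2f}")
```

In practice, such a quantitative result would only frame the inclusive Level 2 deliberation; interpretation by diverse stakeholders remains the decisive step.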

As exemplified in Box 2, Level 2 discussions and analyses seek to identify component interactions contributing to unsatisfactory outcomes rather than reflexively assuming that the problem is inherent in the component of initial concern. In this case, system-wide process and procedure changes were needed to prevent the recurrence of the excessive cumulative burden placed on students by multiple time-consuming online assignments. Broad stakeholder representation in Level 2 discussions not only raised general awareness of the problem but also highlighted the need to establish guidelines for and coordination of out-of-class assignments as expansion of the flipped classroom approach was encouraged.

Box 2. Level 2 Example of Medical Education Decisions Informed by Systems-Based Assessment: Identification of Component Interactions Contributing to Program Performance

Two previously highly regarded courses received lower course evaluation ratings than had been reported for the past several years. The decline was associated with an overall decrease in mean student grades, primarily reflecting an inferior quality of final projects submitted. No changes had occurred in either course’s content or the teaching approaches employed. A new IPE course, introduced into the same curriculum block, received “good” evaluations, but students’ commentary on the IPE course’s evaluation noted its use of time-consuming, Web-based modules and collaborative “outside-of-class” team activities. The latter observations were subsequently substantiated in student focus group discussions.

Based on its analyses, Level 2 deliberations concluded that a process was needed to better coordinate “outside-of-class” time demands on students during individual curriculum segments. Given the institution’s current interest in the use of innovative “flipped classrooms,” Level 2 also initiated monitoring of the time expenditures required for students to complete online activities and group projects. This ultimately resulted in the development of an institutional policy on the use of online and “outside-of-class” assignments that was put into place and periodically reassessed.

Abbreviation: IPE indicates interprofessional education.

The feasibility and value of inclusive stakeholder interpretation of medical education assessment data have been reported by Stratton et al,16 Goldman et al,17 and Stoddard et al.48

Level 3: Assessment to prepare system for future changes

Ideally, Level 3 evaluations occur at periodic intervals to comprehensively review longitudinal system performance and determine system readiness to prepare learners for evolving changes in the medical education training continuum and the health care system. Preparation for long-range planning requires compiling several years of information on system performance combined with feedback from former trainees, regulatory bodies, and “end users” (patients, advanced training program directors, and future employers). Collectively, these data allow a program to compare the capabilities of its current graduates with those that will be expected of future graduates. Level 3 deliberations, which include all program stakeholders, focus on the program changes needed to prepare graduates to meet projected expectations, considered in the context of a realistic appraisal of the supports necessary to implement them, including costs in time, effort, and financial support. The comprehensive Level 3 evaluations proposed in our framework are likely to become more important as calls for program accountability44–47 continue and additional sources of information on program graduates’ performance are pursued.49–52

The example depicted in Box 3 illustrates how the Level 3 assessment process, informed by Level 2, identified a critical deficiency in a program’s assessment of learners’ competencies. Level 3 evaluations resulted in revisions of the criteria for learner performance assessments to align them more closely with health care system expectations. These evaluations also prompted the system to initiate monitoring of cost containment performance for all program stakeholders.

Box 3. Level 3 Example of Medical Education Decisions Informed by Systems-Based Assessment: Program Changes Needed to Meet Future External Health Care Needs and Expectations

A primary care postgraduate program at a tertiary academic medical center noted decreased recruitment of its graduates by local primary care practices over the past several years. Informal feedback from current employers of recent program graduates suggested that these graduates tended to be overly dependent on diagnostic tests and procedures, as well as multiple specialty consultations, to make clinical decisions. The program’s prior interval reaccreditation had concluded that it was meeting all required expectations. Previous Level 2 analyses had found no changes in residents’ overall clinical performance associated with the increased acuity and complexity of patients cared for at the medical center. However, the performance criteria currently in use did not specifically track cost of care for faculty or residents.

In view of growing societal and political concerns about rising health care costs, the omission of performance parameters indicative of cost:benefit considerations in patient care management was determined to be a critical deficiency in the residency program’s assessment criteria. Inclusive stakeholder discussions concluded that intermediate and final competency assessment criteria should be revised to include cost of care indicators for both residents and faculty, with these indicators monitored at Level 2. Curriculum changes were introduced in conjunction with a process to provide feedback to residents and faculty on the comparative costs and benefits of care.

Discussion

Medical education’s emphasis on the importance of systems thinking would be significantly reinforced by the visible use of systems thinking and practices within the programs in which learners train and faculty work. The explicit use of a systems approach to assessment provides learners and faculty with firsthand experience in systems thinking and its utility in framing relevant system problems and identifying solutions.

Given medical education’s enthusiasm for advancing systems-based practice in its trainees, one could ask, “Why is this holistic approach not more evident in academic institutions?” Research indicates that individuals have a natural propensity to adopt a reductionist strategy when confronted with excessive amounts of information or complex data and to focus on the most accessible features for decision making.25 Proficiency in systems thinking is promoted by its practical application in one’s work and by firsthand experience of its benefits in problem solving compared with more linear thinking.20,24,26,53

Organizational infrastructure is another barrier to advancing a systems perspective on one’s institution.25 Narrower views of an organization are observed in rigidly hierarchical institutions and when exposure to other stakeholders and their roles is limited.25,53 More collaborative processes for data interpretation and decision making have been attempted in both undergraduate11,16,17,34,48 and graduate programs.54 Their reported success in promoting systems thinking and in producing more actionable information for decision making supports the feasibility and sustainability of the assessment framework we propose.

Despite the potential institutional benefits of the proposed framework, concerns about the increased stakeholder effort required for its implementation are legitimate. However, recent appeals from thought leaders in higher education13 and from medical education accrediting bodies10,11,40 indicate support for more comprehensive program evaluation. Both groups encourage more frequent, internally initiated review of outcomes data, with specific attention to systemic factors that can be improved. Systems scientists maintain that a more holistic approach to assessment and evaluation results in greater efficacy and efficiency, improvement in the predictive value of the assessment criteria employed, and more actionable information to guide program decision making. If so, all of these benefits would be valuable in advancing the medical education continuum’s ultimate goal of preparing providers to serve the needs of patients.

Acknowledgments: The authors acknowledge the formative value of their discussions with both faculty and scholars participating in the Harvard Macy Institute program, “A Systems Approach to Assessment in Health Professions Education,” over the past 10 years. These considerations of medical education from a holistic perspective have significantly contributed to the development and refinement of the framework presented in this paper.

References

1. Nolan TW. Understanding medical systems. Ann Intern Med. 1998;128:293–298.
2. Rouse WB. Health care as a complex adaptive system: Implications for design and management. The Bridge. 2008;38:17–25.
3. Jordon M, Lanham HJ, Anderson RA, McDaniel RR Jr. Implications of complex adaptive systems theory for interpreting research about health care organizations. J Eval Clin Pract. 2010;16:228–231.
4. Batalden PB, Leach DC. Sharpening the focus on systems-based practice. J Grad Med Educ. 2009;1:1–3.
5. Berwick DM, Finkelstein JA. Preparing medical students for the continual improvement of health and health care: Abraham Flexner and the new “public interest.” Acad Med. 2010;85(9 suppl):S56–S65.
6. Frenk J, Chen L, Bhutta ZA, et al. Health professionals for a new century: Transforming education to strengthen health systems in an interdependent world. Lancet. 2010;376:1923–1958.
7. Ricketts TC, Fraher EP. Reconfiguring health workforce policy so that education, training, and actual delivery of care are closely connected. Health Aff (Millwood). 2013;32:1874–1880.
8. Aschenbrener CA, Ast C, Kirch DG. Graduate medical education: Its role in achieving a true medical education continuum. Acad Med. 2015;90:1203–1209.
9. Holmboe ES. Realizing the promise of competency-based medical education. Acad Med. 2015;90:411–413.
10. Holmboe ES, Batalden P. Achieving the desired transformation: Thoughts on next steps for outcomes-based medical education. Acad Med. 2015;90:1215–1223.
11. Barzansky B, Hunt D, Moineau G, et al. Continuous quality improvement in an accreditation system for undergraduate medical education: Benefits and challenges. Med Teach. 2015;37:1032–1038.
12. Pangaro LN. Two cheers for milestones. J Grad Med Educ. 2015;7:4–6.
13. Kuh GD, Ikenberry SO, Jankowski NA, et al. Fostering greater use of assessment results. In: Using Evidence of Student Learning to Improve Higher Education. 2015. San Francisco, CA: Jossey-Bass.
14. Haeckel S. The development and application of organizational knowledge. IBM Syst J. January 30, 1997. http://www.senseandrespond.com/downloads/Knowledge_Dev_ABI_Whitepaper_1997.pdf. Accessed June 5, 2016.
15. Martin CJ. Group sense making in dynamic environments: A complex adaptive system perspective. Talk presented at: Fourth International Conference on Engaged Management Scholarship; September 14, 2014; Tulsa, OK. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2555697. Accessed June 5, 2016.
16. Stratton TD, Rudy DW, Sauer MJ, Perman JA, Jennings CD. Lessons from industry: One school’s transformation toward “lean” curricular governance. Acad Med. 2007;82:331–340.
17. Goldman EF, Swayze SS, Swinehart SE, Schroth WS. Effecting curricular change through comprehensive course assessment: Using structure and process to change outcomes. Acad Med. 2012;87:300–307.
18. Rechtin E. Systems Architecting of Organizations: Why Eagles Can’t Swim. 2000. Boca Raton, FL: CRC Press.
19. Mennin S. Self-organisation, integration and curriculum in the complex world of medical education. Med Educ. 2010;44:20–30.
20. Plsek P. Complexity and the adoption of innovation in health care. Talk presented at: Fourth International Conference on Engaged Management Scholarship; September 10–14, 2014; Tulsa, OK. http://www.nihcm.org/pdf/Plsek.pdf. Accessed June 5, 2016.
21. Spear S, Bowen HK. Decoding the DNA of the Toyota production system. Harv Bus Rev. September–October 1999:96–106.
22. Colbert CY, Ogden PE, Ownby AR, Bowe C. Systems-based practice in graduate medical education: Systems thinking as the missing foundational construct. Teach Learn Med. 2011;23:179–185.
23. Senge P. The leader’s new work: Building learning organizations. Sloan Manage Rev. 1990;32:7–22.
24. De Savigny D, Adam T. Systems Thinking for Health System Strengthening. 2009. Geneva, Switzerland: WHO Press.
25. Valerdi R, Rouse WB. When systems thinking is not a natural act. Talk presented at: 4th Institute of Electrical and Electronics Engineers (IEEE) Annual Systems Conference; April 5–8, 2010; San Diego, CA. http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=5482446&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D5482446. Accessed June 5, 2016.
26. Senge P. The Fifth Discipline. 1990. New York, NY: Doubleday.
27. Kim DH. The link between individual and organizational learning. Sloan Manage Rev. 1993;25:37–50.
28. Mintzberg H, Van der Heyden L. Organigraphs: Drawing how companies really work. Harv Bus Rev. 1999;77:87–94, 184.
29. Williams B, Imam I. Systems Concepts in Evaluation: An Expert Anthology. 2007. San Rafael, CA: EdgePress of Inverness.
30. Shook J. How to Change a Culture: Lessons From NUMMI. MIT Sloan Manage Rev. Winter 2010. http://sloanreview.mit.edu/article/how-to-change-a-culture-lessons-from-nummi/. Accessed June 5, 2016.
31. Lamb CT, Rhodes DH. Collaborative systems thinking: Uncovering the rules of team-level systems thinking. Talk presented at: Massachusetts Institute of Technology 3rd Annual IEEE Systems Conference; March 26, 2009; Vancouver, British Columbia, Canada. http://ieeexplore.ieee.org/xpl/login.jsp?tp=&arnumber=4815837&url=http%3A%2F%2Fieeexplore.ieee.org%2Fxpls%2Fabs_all.jsp%3Farnumber%3D4815837. Accessed June 5, 2016.
32. Garvin DA. Building a learning organization. Harv Bus Rev. 1993;71:78–91.
33. Eoyang G, Berkas T. Evaluating performance in a complex adaptive system (CAS). In: Lissack M, Gunz H, eds. Managing Complexity in Organizations: A View in Many Directions. 1999. Westport, CT: Quorum Books.
34. Goldfarb S, Morrison G. Continuous curricular feedback: A formative evaluation approach to curricular improvement. Acad Med. 2014;89:264–269.
35. Christensen L, Karle H, Nystrup J. Process–outcome interrelationship and standard setting in medical education: The need for a comprehensive approach. Med Teach. 2007;29:672–677.
36. Bordage G, Harris I. Making a difference in curriculum reform and decision-making processes. Med Educ. 2011;45:87–94.
37. Ricketts C, Bligh J. Developing a “frequent look and rapid remediation” assessment system for a new medical school. Acad Med. 2011;86:67–71.
38. Hafferty FW. Beyond curriculum reform: Confronting medicine’s hidden curriculum. Acad Med. 1998;73:403–407.
39. Armstrong EG, Mackey M, Spear SJ. Medical education as a process management problem. Acad Med. 2004;79:721–728.
40. Hunt D, Migdal M, Eaglen R, Barzansky B, Sabalis R. The unintended consequences of clarity: Reviewing the actions of the Liaison Committee on Medical Education before and after the reformatting of accreditation standards. Acad Med. 2012;87:560–566.
41. Warm EJ. Interval examination: The ambulatory long block. J Gen Intern Med. 2010;25:750–752.
42. Aretz HT. Some thoughts about creating healthcare professionals that match what societies need. Med Teach. 2011;33:608–613.
43. Lindgren S, Gordon D. The doctor we are educating for a future global role in health care. Med Teach. 2011;33:551–554.
44. Baron RB. Can we achieve public accountability for graduate medical education outcomes? Acad Med. 2013;88:1199–1201.
45. Pershing S, Fuchs VR. Restructuring medical education to meet current and future health care needs. Acad Med. 2013;88:1798–1801.
46. O’Malley PG, Pangaro LN. Research in medical education and patient-centered outcomes: Shall ever the twain meet? JAMA Intern Med. 2016;176:167–168.
47. Headrick LA, Ogrinc G, Hoffman KG, et al. Exemplary care and learning sites: A model for achieving continual improvement in care and learning in the clinical setting. Acad Med. 2016;91:354–359.
48. Stoddard HA, Brownfield ED, Churchward G, Eley JW. Interweaving curriculum committees: A new structure to facilitate oversight and sustain innovation. Acad Med. 2016;91:48–53.
49. Asch DA, Nicholson S, Srinivas SK, Herrin J, Epstein AJ. How do you deliver a good obstetrician? Outcome-based evaluation of medical education. Acad Med. 2014;89:24–26.
50. Chen C, Petterson S, Phillips R, Bazemore A, Mullan F. Spending patterns in region of residency training and subsequent expenditures for care provided by practicing physicians for Medicare beneficiaries. JAMA. 2014;312:2385–2393.
51. Peterson LE, Carek P, Holmboe ES, Puffer JC, Warm EJ, Phillips RL. Medical specialty boards can help measure graduate medical education outcomes. Acad Med. 2014;89:840–842.
52. Sirovich BE, Lipner RS, Johnston M, Holmboe ES. The association between residency training and internists’ ability to practice conservatively. JAMA Intern Med. 2014;174:1640–1648.
53. Davidz HL, Nightingale DJ, Rhodes DH. Enablers and barriers to systems thinking development: Results of a qualitative and quantitative study. Talk presented at: Proceedings of the Conference on Systems Engineering Research; March 23–25, 2005; Hoboken, NJ. http://citeseerx.ist.psu.edu/viewdoc/download;jsessionid=0CD4BBC44BC73D402DE167F7402E4809?doi=10.1.1.564.682&rep=rep1&type=pdf. Accessed June 5, 2016.
54. Curry RH, Burgener AJ, Dooley SL, Christopher RP. Collaborative governance of multiinstitutional graduate medical education: Lessons from the McGaw Medical Center of Northwestern University. Acad Med. 2008;83:568–573.
© 2017 by the Association of American Medical Colleges