Academic Medicine: August 2013 - Volume 88 - Issue 8
doi: 10.1097/ACM.0b013e318299396f
Perspectives

From Flexner to Competencies: Reflections on a Decade and the Journey Ahead

Carraccio, Carol L. MD; Englander, Robert MD


Author Information

Dr. Carraccio is vice president of competency-based assessment, American Board of Pediatrics, Chapel Hill, North Carolina.

Dr. Englander is senior director, Competency-based Learning and Assessment, Association of American Medical Colleges, Washington, DC.

Correspondence should be addressed to Dr. Carraccio, American Board of Pediatrics, 111 Silver Cedar Ct., Chapel Hill, NC 27514; telephone: (919) 929-0461; e-mail: ccarraccio@abpeds.org.


Abstract

This article is a sequel to one published in 2002 only a few years after the initiation of the shift to competency-based medical education (CBME). The authors reflect on the major forces that have influenced the movement and tipped the balance toward widespread adoption of CBME in the United States, primarily in graduate medical education. These forces include regulatory bodies, international counterparts, and the general public.

The authors highlight the most important lessons learned over the decade. These include (1) the need for standardization of language to develop a shared vision of the path ahead, (2) the power of direct observation in assessment, (3) the challenge of developing meaningful measures of performance, (4) desired outcomes as the starting point for curriculum development, (5) dependence on reflection in the development of expertise, (6) the need for exploiting the role of learners in their learning, and (7) competent clinical systems as the required learning environment for producing competent physicians.

The authors speculate on why this most recent attempt to shift to CBME differs from previous aborted attempts. They conclude by explaining how the recent lessons learned inform the vision of what successful implementation of CBME would look like, and discussing the importance of milestones, entrustable professional activities, and an integrated, rather than a reductionist, approach to assessment of competence. The fundamental question at each step along the way in implementing CBME should be “How do we improve medical education to provide better care for patients?”

In 2002, we and others1 published a manuscript in Academic Medicine entitled “Shifting paradigms: From Flexner to competencies” that reviewed the history of competency-based medical education (CBME) and the steps needed for implementation. We speculated on the reasons for past failures to reach the tipping point that would allow widespread adoption. We wrote the present article to (1) reflect on the progress made in CBME in the United States over the past decade, highlighting the lessons learned, and (2) apply these lessons to inform the challenges and opportunities we face as we look toward the horizon.


Reflections on the Past Decade: Progress Is Not Always Easy …

As with any important change, great resistance emerged when the Accreditation Council for Graduate Medical Education (ACGME) and the American Board of Medical Specialties (ABMS) adopted in 1999 what have become known as “the ACGME competencies.” As a result, all graduate medical education (GME) training programs were required to phase in the teaching and assessment of the competencies within each of the six domains of patient care, medical knowledge, practice-based learning and improvement, interpersonal and communication skills, professionalism, and systems-based practice (SBP).2 While some regulatory bodies were mandating the competencies and many professional associations were embracing them, the medical education community was divided at best in its acceptance of this frame shift.


… But We Have Seen Slow and Steady Progress

Despite the dissenters, we have seen slow and steady progress in the CBME movement.2 We hypothesize a number of contributors to the relentless march of CBME: (1) the power of regulatory bodies in requiring its implementation, (2) the expanding adoption of the competency framework worldwide, and (3) the outcry from the public about physician accountability and quality of care. We will examine each in turn.

The sentinel events behind the adoption of CBME in the United States have been the ACGME’s development and enforcement of its requirements for teaching and assessing the competencies in all accredited residency and fellowship training programs. Its message was reinforced by the ABMS, which led member boards to require documentation of competence in the six domains for initial certification and to require diplomates to engage in maintenance of certification (MOC) programs built on the competency framework.3 At the undergraduate medical education (UME) level, the Liaison Committee on Medical Education adopted standard ED-1-A, which states:

The objectives of a medical education program must be stated in outcome-based terms that allow assessment of student progress in developing the competencies that the profession and the public expect of a physician.4

However, the shift to CBME in UME has been slow, at least in part because of the absence of standardization of the specific educational outcomes, or competencies, required of the medical school graduate. The moral of the story is that when regulatory bodies sink their teeth in, shift happens!

International efforts around CBME have also proliferated, helping drive us toward the “tipping point” in the U.S. medical education system. Published frameworks such as the Scottish doctor learning outcomes5 and the CanMEDS roles6 (medical expert, communicator, collaborator, manager, health advocate, scholar, and professional) took hold and continue to evolve. In 2009, the Royal College of Physicians and Surgeons of Canada, led by Jason Frank, MD, convened an international “theory-to-practice consensus conference” to bring thought leaders from around the world together to build consensus regarding definitions and foundational principles of CBME.7 These thought leaders have developed an organized group, the International Competency-based Medical Education Collaborators, publishing the proceedings of the summit in the August 2010 issue of Medical Teacher8 and continuing to share experiences, pioneering efforts, and lessons learned through an ongoing series of conference calls.

Perhaps the most compelling catalyst for adoption of CBME has been the public indictment of the current health care system and its practitioners. The seminal Institute of Medicine (IOM) publication To Err Is Human—Building a Safer Health Care System9 exposed a vulnerability in patient safety and raised concern about a breach of the public trust. The resultant focus on the quality of care led the IOM to publish its quality indicators,10 which demand that care be safe, effective, efficient, timely, patient-centered, and equitable. These indicators underscored the gaps in the performance of both individual practitioners and the health care system. One effect has been that whereas medical educators initially questioned the importance of the ACGME domains of competence other than patient care and medical knowledge, especially practice-based learning and improvement (PBLI) and SBP, these latter competencies are now embraced as critical to the solutions that will allow us to improve the quality of health care and care delivery systems. For a self-regulating profession, the privilege bestowed is matched by the responsibility to address public needs and maintain the public trust. A hallmark of CBME is that it is driven by the health needs of populations and the health systems that serve those populations.11


Setting the Stage for the Current State of Medical Education

After mandating the shift to CBME in GME programs, the ACGME called on the medical education community to be the implementation arm by encouraging experimentation in teaching and assessing the competencies and by adopting best practices that emerged from the field. But this has proven to be a struggle, which, based on history, is not surprising.

Our prior review of the literature1 on attempts to shift to CBME in the 1970s and 1980s suggested a four-step implementation process: (1) competency identification, (2) determination of competency components and performance levels, (3) competency assessment, and (4) overall evaluation of the process. On the basis of this review, we speculated that the reason for the lack of previous success was the inability to adequately address step 3, competency assessment. Although this has not stopped the rejuvenation of the movement, it remains the major challenge. Of note, with the exception of national tests of knowledge, the need for reliable and valid assessment tools is not unique to CBME but characterized the structure–process system of education that preceded it. The struggle to develop tools to meaningfully assess competencies—complex constructs, each of which represents an integration of knowledge, skills, and attitudes—resulted in a reductionist approach, using behavioral checklists.12 Although checklists serve an important role in determining one’s ability to accomplish steps in task-oriented activities, they fall short of assessing whether a trainee is capable of integrating the requisite behaviors to deliver safe and effective care to patients. This stumbling block has reaffirmed the position of those who believe that CBME is a fad, because assessment has not yielded the large-scale proof that they are demanding—namely, that CBME produces better doctors. As this quest for the Holy Grail continues, the rich conversations and heated debates have shed light on some critical issues in medical education.


Lessons Learned

Over the course of the decade, some fundamental lessons have emerged. We posit the following as perhaps having the most influence on the adoption of CBME to date and in the future: (1) Standardizing language is critical but not yet complete, (2) guided direct observation of learners has great potential for more accurately assessing competence, (3) meaningful assessment requires a panoramic view of trainee performance that focuses on meaningful measures, (4) outcomes really do drive curriculum, (5) CBME is about learners on a trajectory to expertise and mastery, and reflection is a critical component of the development of expertise, (6) we have only scratched the surface of exploiting the role of the learner in learning, and (7) developing competent individuals requires that they train in competent care delivery systems. We discuss each of these lessons below.

Standardizing language

Standardization of language is critical to adaptive change. At the same time that the ACGME was developing the six domains of competence, the Association of American Medical Colleges (AAMC) developed four domains of competence in the Medical School Objectives Project: knowledgeable, skillful, altruistic, and dutiful.13 Although the essences of the two may have been convergent, the divergent language prohibited formation of a shared mental model and may have served to further separate the UME and GME silos, as teachers within those two silos had different images of what they were teaching and assessing. The AAMC has subsequently supported the ACGME domains14 as they have come into common parlance, as well as an additional domain, interprofessional collaboration, that emerged from the work of six health care organizations that constitute the Interprofessional Education Collaborative (IPEC).15 A second new domain, personal and professional development (PPD), has been incorporated into the Pediatric Milestone Project as a result of feedback from pediatric program directors about what was missing from the ACGME framework.16 Although most of the competencies within these two new domains could potentially be subsumed by the ACGME domains, they were made explicit to highlight their importance in the formation of the 21st-century doctor. Going forward, the current convergence of both essence and language in CBME should accelerate our ability to both study what we do and apply it across the educational continuum.

In addition to the divergence of language around the targeted domains of competence, the education community has struggled to agree on common definitions of the terminology of competence and of CBME itself. Fernandez and colleagues17 found agreement through a search of the literature that competence is composed of “knowledge, skills and other components,” but they were unable to uncover consensus on what those other components are. Some suggest they may be personal characteristics, attitudes, or values. We are perhaps closer to a definition of CBME, as Frank et al18 have recommended the following on the basis of an extensive literature search and extraction of themes:

Competency-based education (CBE) is an approach to preparing physicians for practice that is fundamentally oriented to graduate outcome abilities and organized around competencies derived from an analysis of societal and patient needs. It de-emphasizes time-based training and promises greater accountability, flexibility, and learner-centredness.

Direct observation

There are two crucial points that have emerged as we have attempted to assess learners through direct observation. The first is that our preconceived notion of what learners are capable of doing may result in a significant gap between expectations and performance. Joorabchi and Devries,19 for example, developed an objective structured clinical examination and worked with faculty members to develop minimum pass levels for residents in each year of training. Their results showed that only 59% of first-year, 45% of second-year, and 4% of third-year residents met the predetermined standard. The second point is well articulated by Green and Holmboe,20 who say that “the biggest problem in evaluating competencies is … not the lack of adequate assessment instruments but, rather, the inconsistent use and interpretation of those available by unskilled faculty.” Kogan et al,21 in their qualitative study of the elements that influence faculty ratings of learners, demonstrate the wide variability in expectations and comparison standards. The work of Holmboe22 in direct observation outlines a three-step process to enable faculty to develop a shared mental model of learner performance: (1) behavioral observation training that outlines the general process (preparation, practice, and tools to aid observation), (2) performance dimension training to ensure that faculty understand the specific behaviors they are to assess and the criteria for assessing them, and (3) frame-of-reference training in which a group of faculty come to consensus about what behaviors constitute a given level of performance. A controlled trial of faculty development to improve skills in direct observation demonstrated a change in rater behavior up to eight months post intervention compared with a control group, providing evidence to support the development of faculty observation skills in mitigating the challenges of competency assessment.23

Meaningful measures of performance

The introduction of CBME heralded rigorous debate around meaningful measures of performance. This provided a stark contrast to the complacency that accompanied the traditional assessment of most learners, which relied on a single global homegrown tool that faculty completed at the end of a given learning experience. However, progress toward those meaningful measures remains the Achilles heel of CBME. The complexity of assessing the competencies initially led to the reductionist approach noted above: Educators broke the competencies down into smaller and smaller fragments of behaviors that could be directly observed and assessed them using a checklist. In trying to simplify something that is inherently complex, our tools allowed us to judge whether learners could perform simple tasks but not whether they were capable of integrating those tasks to care for patients. Van der Vleuten and Schuwirth12 highlight the major pitfall of this approach, saying that “[a]tomisation may lead to trivialization and may threaten validity and, therefore, should be avoided.” We can mitigate this threat to validity by embracing the complexity of the competencies; that is, by looking for ways to assess the “whole,” which in care delivery is greater than the sum of its parts. This will require us to accept qualitative as well as quantitative assessments, which compounds concerns about our ability to develop robust tools with acceptable psychometric properties.

Van der Vleuten24 has proposed a “utility model” for judging the value of assessment tools that reaches beyond the traditional psychometric properties. He suggests that the utility of a tool is the product of its reliability, validity, cost, acceptability, and educational impact. He highlights a few key messages about the individual factors as well as the interplay among them in this expanded perspective. For example, reliability is not inherent to a tool, nor does it equate with objectivity; in fact, subjective tools can be reliable.12 The essence of reliability is adequate and appropriate sampling of the domain to be assessed. Cumbersome and costly tools will not be used, and the resulting insufficient sampling undermines reliability.

Additionally, van der Vleuten24 suggests that context plays a major role in contributing to validity evidence. The use of workplace-based assessments that measure the “does” (action), which sits at the top of Miller’s25 pyramid of performance indicators for learner assessment, is an example of an approach that provides evidence of validity. Importantly, however, the acceptability of a tool has a critical impact on faculty buy-in to workplace-based assessment, particularly as the shift to CBME has expanded the domains of skills to be assessed. Studying the implementation of assessment tools in real-world settings—what works and what doesn’t for the faculty using the tool—thus becomes as critical as demonstrating their reliability and validity.

Finally, we want to underscore the importance of the educational impact to the value of an assessment tool. Tools that may sacrifice a bit on reliability or validity but score high on impact may do more to advance learning. Portfolios have received much criticism because of the difficulty of reliably assessing them. There are ways to address the concerns, however, and one could argue that what is lost in reliability is gained in the educational impact of actively engaging learners in their learning and assessment.26,27 In the words of Friedman Ben-David,28 “The assessment exercise [itself] becomes the teachable moment.”

In summary, the challenges to assessment in CBME are real, but we can address them by (1) improving reliability through frequent sampling, (2) reducing cost and increasing acceptability by building assessment into our daily work and studying the issues in implementation, (3) providing validity by bringing those assessments to the authentic clinical environment and aligning what we measure with what we do, and (4) adding impact by making assessment a “team sport” for learners and their evaluators. Imagine, for example, a hospitalist with a smartphone application that allows direct, real-time assessment of learner performance, delivered electronically to that learner as well as to the team of faculty responsible for that learner’s assessment. Although that may have seemed highly futuristic when we started our journey toward CBME, this point-of-care technology is currently being piloted in the pediatrics community.29

Outcomes drive curriculum

The past decade has underscored the importance of defining the outcomes that should drive curriculum. One example is the delineation of PBLI as a domain of competence and the resultant explosion in quality and safety curricula at the UME and GME levels, as well as the addition of Parts 2 and 4 to MOC.3 Another example has been the inclusion of a specific competency in interprofessional teamwork, with its resultant proliferation of activity and curricula around interprofessional education. The recent expansion of the outcomes defined by the Interprofessional Education Collaborative has attracted funding for its efforts and will no doubt catalyze further development of interprofessional education programs.15

Expertise is the ultimate goal of CBME and requires reflective practice

Paradoxically, the goal of CBME is not necessarily “competence” per se but, rather, continual progress along a trajectory toward expertise or mastery, with competence defined as demonstrating the expected performance for a given level of experience. This paradox has resulted in many educators’ embracing the Dreyfus and Dreyfus model30 of skill acquisition to assess learners as they progress along a continuum from novice to expert. Advancement toward expertise requires that physicians think about and analyze their practice every step of the way. This continual investment in learning for the sake of improvement has been described by Epstein31 as “mindful practice” and by Ericsson32 as “deliberate practice.” If one does not engage in reflection for, in, and on practice, one will become what Bereiter and Scardamalia33 describe as an “experienced non-expert.” Only through continuous analysis of what one does and what happens as a result of the doing can one become a true expert. There is literature to suggest that reflective skills need not be innate and can be taught, highlighting a critical role for mentors in helping learners progress beyond competence toward expertise.34

Exploiting the role of learners in learning

Active engagement in learning is a foundational principle of CBME.1 Medical educators, however, in designing training programs, often fail to take into account the existing literature on motivation to learn, the neurobiology of learning, and how one learns.35–37 This gap becomes more noticeable in CBME as the playing field levels, the traditional roles of the learner and teacher shift, and the learners are expected to take ownership of their education. The construct of self-directed learning has received much attention and debate as the paradigm of education shifts. Much of the debate about whether one can be self-directed results from a narrow interpretation of Knowles’38 original definition, ignoring his call for external input as a fundamental element. Without external input, our capability to direct our own learning is poor as a result of our flawed ability to accurately self-assess.39 However, using Knowles’ original definition—in which the self-directed learner invokes the help of others—combined with the literature on motivation, neurobiology, and strategies for learning, medical educators can and should redesign and improve on existing approaches to learning.35–37 In addition, educators need to be explicit in what they are role modeling (reflecting out loud with learners), as the literature suggests behavioral change is more likely when we call attention to and provide the rationale for our actions.40

Although it is beyond our scope to review learning theories per se, self-determination theory warrants brief mention because of its relevance to CBME. This theory speaks to the innate need for competence, autonomy, and relatedness as driving the desire to learn.35,41 This desire can be kindled or thwarted by faculty and the learning environment. We kindle the desire to learn, addressing the innate need for competence, autonomy, and relatedness, respectively, by (1) sequencing learning activities (simple to complex) to encourage learning capacity and then providing formative feedback that enhances competence, (2) adjusting the degree of supervision to align with the degree of learner competence, serving the dual role of ensuring safe care and encouraging learners’ professional development, and (3) building relationships between learners and their patients, faculty, and interprofessional team members. The innate need for relatedness cannot be overemphasized. Recent evidence suggests that acceptance and incorporation of feedback are dependent on the receiver’s perceptions of the giver’s investment in their professional formation,42,43 calling into question the fragmentation of relationships resulting from the typical block structure of medical school clerkships and residency training programs.44 Training models that provide longitudinal experiences have demonstrated that the typical erosion in professionalism that occurs during the third year of medical school is significantly reduced when students have yearlong meaningful relationships with faculty and patients.45 This evidence invokes a call to action.

Competent systems are a prerequisite for training competent practitioners

Although the ACGME competencies are context-independent, the framework of competency-based education underscores the critical role that context plays, including the importance of the clinical microsystem in which one trains. In a seminal study, Asch et al46 retrospectively analyzed the 4,906,169 deliveries performed in New York and Florida between 1992 and 2007. A total of 4,124 obstetricians, representing 107 U.S. training programs, performed the deliveries. Using nine measures of maternal complications, they found that obstetricians from training programs in the bottom quintile for risk-standardized major maternal complications had an adjusted complication rate one-third higher than that of obstetricians from programs in the top quintile. The implications should give the reader pause. This work clearly demonstrates that the “competence” of the clinical environment in which one trains affects the competence of the trainee and can be likened to imprinting during a critical phase of professional formation. Asch and colleagues’ work suggests that an inadequate clinical environment can thwart the professional development of its trainees even in the most well-fashioned competency-based education systems. This has important implications for the accreditation of training programs.


Recent Advances: The Milestone Project and Entrustable Professional Activities

In 2009, the ACGME again partnered with member boards of the ABMS in initiating the Milestone Project47 as the next step in advancing CBME. Each specialty was charged with (1) defining and refining the language of the competencies within the context of the specialty, (2) identifying expected levels of performance at the completion of each year of GME training, and (3) identifying and/or developing national tools to assess milestones. All specialties will be required to begin assessing milestones of trainees with the advent of “the next accreditation system,” which began its phase-in effective July 2013.48

We were members of the Milestone Working Group for Pediatrics, which created a series of milestones for each competency within the ACGME domains, as well as for the domain of personal and professional development that was added by our community. The process for their development has previously been addressed.16 The end product is a series of brief narratives describing behaviors at each performance level or milestone, across the developmental continuum from entering medical student through expert practitioner, for a given competency.49 The value of milestones is related to (1) the descriptions of the competencies in terms of observable behaviors that make sense to faculty and learners, (2) the learning road map they provide to trainees, and (3) the specific content that creates a foundation for formative feedback and assessment. Their contribution to assessment is at a granular level—that is, in the direct observation of learners demonstrating the knowledge, skills, and attitudes of a specific competency, such as gathering essential and accurate information about the patient or performing a thorough physical examination. What the milestones do not address is the integration of competencies across domains, which is requisite for unsupervised care delivery, such as admitting an acutely ill patient to the hospital. A learner may be able to gather essential and accurate information about the patient and perform a thorough physical examination but may not be able to synthesize this information to provide safe and effective care to the patient. Perhaps Regehr et al50 state the challenge best:

… the solution to improving evaluations may not lie in training faculty to observe and document better or to make minor modifications to existing tools and scales. Rather … efforts at improving clinical performance measures might more profitably focus on fundamentally rethinking the structure of the tools we are using, to ensure that the instruments authentically represent the way in which faculty functionally conceptualize their residents’ clinical competence on a day-to-day basis. What is needed now is the development of methods that will allow faculty members’ subjective representations of their residents’ performance to be smoothly translated into some form of documentation.

This, and other work by Ginsburg et al,51,52 has emphasized both the value of expert judgment and a more integrated approach to assessment of competencies. Taking this one step further, the use of milestones provides the scaffolding for a shared mental model of performance to inform expert judgment. Similarly, only when the milestones are put into meaningful clusters and embedded in a clinical context, as is possible with entrustable professional activities (EPAs),53 can they provide a holistic perspective on learner assessment.

EPAs, introduced by ten Cate and Scheele,53 provide one potential solution to addressing the importance of clinical context to a holistic assessment of learners. EPAs are the routine units of work that define a profession and thus embed the context-independent competencies and their milestones in a clinical context.53,54 An example EPA in pediatrics would be “care for the well newborn,” which would require an integration of competencies across the ACGME domains.54,55 One can map an EPA to domains of competence, competencies within the domains, and their respective milestones. This mapping process needs to be judicious for purposes of addressing the feasibility, cost, and educational impact components of van der Vleuten’s24 utility model of assessment and, thus, must be limited to the critical competencies required for entrustment. In this case, entrustment is defined as effective practice without supervision. As presented in Table 1, once mapping is completed, creating a matrix by juxtaposing the competencies (represented by the rows) and the EPAs (represented by the columns) helps to advance our thinking about meaningful assessment of learners. This crosswalk of competencies and EPAs is fundamental to developing a blueprint for curriculum and assessment that ensures that the identified EPAs embrace all competencies needed to practice in the 21st century, such as practice analysis and work in interprofessional teams—skills that were not embraced as routine units of work prior to the ACGME Outcome Project.2 The EPAs provide a holistic view of learners as they integrate competencies across domains to deliver care and thus complement the granular lens of milestones assessment. Seeing a learner from both vantage points is critical to competency-based assessment. We must be able to both see integrated performance (the EPA) and diagnose the underlying root cause of performance difficulties (the competencies and their respective milestones) to help our learners continually improve.

Table 1 (image not reproduced): Matrix juxtaposing the competencies (rows) and the EPAs (columns).

The Horizon

We are at a pivotal point in medical education. While challenges remain, opportunities abound. The competency-based movement of the last century failed, raising the questions “What is and will be different this time?” and “How will we recognize success?” We address each question in turn.

What is and will be different this time in the widespread adoption of CBME?

In contrast to prior unsuccessful attempts, a number of factors suggest that CBME is now on the verge of widespread adoption. First, regulators from accrediting and certifying bodies, leaders who are innovators and early adopters, and public demand for improved quality in health care have already led us far beyond the 20th-century advances in CBME and will not allow us to regress. Second, support for the competencies and CBME is spreading beyond GME to the UME and MOC arenas. In addition, there is more collaboration among organizations and specialties and across the continuum than ever before. We have begun to break down the traditional silos, allowing us to address both efficacy and efficiency in delivering curricula. Third, through better agreement on the desired outcomes of the practicing physician, we have begun a process of backwards visioning to better delineate the desired competencies of the learner at the GME, UME, and even college levels. This combination of factors has brought us to the “tipping point” in the adoption of CBME.

What will successful adoption of CBME look like?

Our lessons learned from the past decade offer a vision for successful widespread adoption of CBME. First, we will have standardized the language and the desired outcomes so that we share a clear mental model of the trajectory to becoming the “expert” physician. Training will occur in competent institutions that have high-quality outcomes and the capacity to train competent learners who will continually work to improve the care they deliver to patients. Focused on the desired outcomes of the practicing physician, we will backwards-vision the most effective and efficient path for curriculum and equip ourselves with the evidence-based learning strategies that are emerging to take us where we need to go, each step along the educational trajectory building on the previous one. We will travel with our learners as part of an interprofessional team, with all team members being responsible and accountable for their learning. We will have built in “rest stops” along the way for assessment and guided reflection that will take us all toward expertise and mastery. The assessment tools that we use will embrace the complexity of care delivery and focus on what is meaningful and not just what is easily measurable. These tools will be useful by van der Vleuten’s24 standards, some qualitative and some quantitative, most targeted for formative assessment. The journey will be long so that we will have ample time to directly observe learners. From early in the process, learners will develop relationships with patients, mentors, and health care team members. These relationships will help to thwart the typical erosion of professionalism and personal accountability that occurs when there is no sense of belonging. Everyone will walk this journey at their own pace, some arriving sooner than others. We have to be prepared for and take advantage of this variation among learners, and model how to help them help their colleagues along the way who may be struggling.

The stars are aligning to guide us on this journey. The coming of age of medical education research, converging with the quality movement in health care, provides the path forward. Clinician educators must walk side by side with quality improvement experts and medical education researchers to design studies of educational interventions that link educational outcomes with patient care outcomes. The mantra for the journey will be “How do we improve medical education to provide better care to patients?” This is the fundamental question that we are responsible for asking and answering.


Acknowledgments

Funding/Support: None.

Other disclosures: None.

Ethical approval: Not applicable.


References

1. Carraccio C, Wolfsthal SD, Englander R, Ferentz K, Martin C. Shifting paradigms: From Flexner to competencies. Acad Med. 2002;77:361–367

2. Swing SR. The ACGME outcome project: Retrospective and prospective. Med Teach. 2007;29:648–654

3. American Board of Medical Specialties. ABMS maintenance of certification. http://www.abms.org/Maintenance_of_Certification/ABMS_MOC.aspx. Accessed April 22, 2013

4. Liaison Committee on Medical Education. Standards for Accreditation of Medical Education Programs Leading to the M.D. Degree. http://www.lcme.org/functions.pdf. Accessed April 22, 2013

5. Simpson JG, Furnace J, Crosby J, et al. The Scottish doctor—Learning outcomes for the medical undergraduate in Scotland: A foundation for competent and reflective practitioners. Med Teach. 2002;24:136–143

6. Frank JR, Danoff D. The CanMEDS initiative: Implementing an outcomes-based framework of physician competencies. Med Teach. 2007;29:642–647

7. Frank JR, Snell LS, Cate OT, et al. Competency-based medical education: Theory to practice. Med Teach. 2010;32:638–645

8. Competency-Based Medical Education Special Issue [summit proceedings]. Med Teach. 2010;32:629–691

9. Kohn LT, Corrigan JM, Donaldson MS. To Err Is Human—Building a Safer Health Care System. 1999 Washington, DC National Academies Press

10. Institute of Medicine. Crossing the Quality Chasm: A New Health System for the 21st Century. 2001 Washington, DC National Academies Press

11. Frenk J, Chen L, Bhutta ZA, et al. Health professionals for a new century: Transforming education to strengthen health systems in an interdependent world. Lancet. 2010;376:1923–1958

12. van der Vleuten CP, Schuwirth LW. Assessing professional competence: From methods to programmes. Med Educ. 2005;39:309–317

13. Association of American Medical Colleges. Learning Objectives for Medical Student Education: Guidelines for Medical Schools. 1998 Washington, DC Association of American Medical Colleges

14. Association of American Medical Colleges. Implementing the Vision: Group on Educational Affairs Responds to the IIME Dean’s Committee Report. Washington, DC Association of American Medical Colleges September 2006. https://members.aamc.org/eweb/upload/ImplementingtheVision.pdf. Accessed April 24, 2013

15. Interprofessional Education Collaborative Expert Panel. Core Competencies for Interprofessional Collaborative Practice: Report of an Expert Panel. 2011 Washington, DC Interprofessional Education Collaborative http://members.aamc.org/eweb/upload/Core%20Competencies%20Interprofessional%20Collaborative%20Practice_Revised.pdf. Accessed April 24, 2013

16. Hicks PJ, Schumacher DJ, Benson BJ, et al. The Pediatrics Milestones: Conceptual framework, guiding principles, and approach to development. J Grad Med Educ. 2010;2:410–418

17. Fernandez N, Dory V, Ste-Marie LG, Chaput M, Charlin B, Boucher A. Varying conceptions of competence: An analysis of how health sciences educators define competence. Med Educ. 2012;46:357–365

18. Frank JR, Mungroo R, Ahmad Y, Wang M, De Rossi S, Horsley T. Toward a definition of competency-based education in medicine: A systematic review of published definitions. Med Teach. 2010;32:631–637

19. Joorabchi B, Devries JM. Evaluation of clinical competence: The gap between expectation and performance. Pediatrics. 1996;97:179–184

20. Green ML, Holmboe E. Perspective: The ACGME toolbox: Half empty or half full? Acad Med. 2010;85:787–790

21. Kogan JR, Conforti L, Bernabeo E, Iobst W, Holmboe E. Opening the black box of clinical skills assessment via observation: A conceptual model. Med Educ. 2011;45:1048–1060

22. Holmboe ES. Direct observation by faculty. In: Holmboe E, Hawkins R, eds. Practical Guide to the Evaluation of Clinical Competence. 2008 Philadelphia, Pa Mosby Elsevier

23. Holmboe ES, Hawkins RE, Huot SJ. Effects of training in direct observation of medical residents’ clinical competence: A randomized trial. Ann Intern Med. 2004;140:874–881

24. Van der Vleuten CPM. The assessment of professional competence: Developments, research and practical implications. Adv Health Sci Educ. 1996;1:41–67

25. Miller GE. The assessment of clinical skills/competence/performance. Acad Med. 1990;65(9 suppl):S63–S67

26. Tekian A, Yudkowsky R. Assessment portfolios. In: Downing SM, Yudkowsky R, eds. Assessment in Health Professions Education. 2009 New York, NY Taylor and Francis

27. Altahawi F, Sisk B, Poloskey S, Hicks C, Dannefer EF. Student perspectives on assessment: Experience in a competency-based portfolio system. Med Teach. 2012;34:221–225

28. Friedman Ben-David M. The role of assessment in expanding professional horizons. Med Teach. 2000;22:472–477

29. Hicks PJ. President, Association of Pediatric Program Directors. Personal communication, August 1, 2012.

30. Batalden P, Leach D, Swing S, Dreyfus H, Dreyfus S. General competencies and accreditation in graduate medical education. Health Aff (Millwood). 2002;21:103–111

31. Epstein RM. Mindful practice. JAMA. 1999;282:833–839

32. Ericsson KA. Deliberate practice and the acquisition and maintenance of expert performance in medicine and related domains. Acad Med. 2004;79(10 suppl):S70–S81

33. Bereiter C, Scardamalia M. Surpassing Ourselves: An Inquiry Into the Nature and Implications of Expertise. 1993 LaSalle, Ill Open Court

34. Sandars J. The use of reflection in medical education: AMEE guide no. 44. Med Teach. 2009;31:685–695

35. Ryan RM, Deci EL. Self-determination theory and the facilitation of intrinsic motivation, social development, and well-being. Am Psychol. 2000;55:68–78

36. Friedlander MJ, Andrews L, Armstrong EG, et al. What can medical education learn from the neurobiology of learning? Acad Med. 2011;86:415–420

37. Rohrer D, Pashler H. Recent research on human learning challenges conventional instructional strategies. Educ Res. 2010;39:406–412

38. Knowles M Self-Directed Learning: A Guide for Learners and Teachers. 1975 Parsippany, NJ Pearson Learning–Cambridge Adult Education

39. Eva KW, Regehr G. Self-assessment in the health professions: A reformulation and research agenda. Acad Med. 2005;80(10 suppl):S46–S54

40. Cruess SR, Cruess RL, Steinert Y. Role modelling—Making the most of a powerful teaching strategy. BMJ. 2008;336:718–721

41. Ten Cate TJ, Kusurkar RA, Williams GC. How self-determination theory can assist our understanding of the teaching and learning processes in medical education. AMEE guide no. 59. Med Teach. 2011;33:961–973

42. Sargeant J, Mann K, van der Vleuten C, Metsemakers J. “Directed” self-assessment: Practice and feedback within a social context. J Contin Educ Health Prof. 2008;28:47–54

43. Watling C, Driessen E, van der Vleuten CP, Vanstone M, Lingard L. Understanding responses to feedback: The potential and limitations of regulatory focus theory. Med Educ. 2012;46:593–603

44. Holmboe E, Ginsburg S, Bernabeo E. The rotational approach to medical education: Time to confront our assumptions? Med Educ. 2011;45:69–80

45. Hirsh D, Gaufberg E, Ogur B, et al. Educational outcomes of the Harvard Medical School–Cambridge integrated clerkship: A way forward for medical education. Acad Med. 2012;87:643–650

46. Asch DA, Nicholson S, Srinivas S, Herrin J, Epstein AJ. Evaluating obstetrical residency programs using patient outcomes. JAMA. 2009;302:1277–1283

47. Nasca TJ. The next step in the outcomes-based accreditation project. ACGME Bulletin. May 2008:2–4

48. Nasca TJ, Philibert I, Brigham T, Flynn TC. The next GME accreditation system—Rationale and benefits. N Engl J Med. 2012;366:1051–1056

49. Pediatrics Milestone Working Group. The Pediatric Milestone Project. https://www.abp.org/abpwebsite/publicat/milestones.pdf. Accessed April 22, 2013

50. Regehr G, Ginsburg S, Herold J, Hatala R, Eva K, Oulanova O. Using “standardized narratives” to explore new ways to represent faculty opinions of resident performance. Acad Med. 2012;87:419–427

51. Ginsburg S, McIlroy J, Oulanova O, Eva K, Regehr G. Toward authentic clinical evaluation: Pitfalls in the pursuit of competency. Acad Med. 2010;85:780–786

52. Ginsburg S, Gold W, Cavalcanti RB, Kurabi B, McDonald-Blumer H. Competencies “plus”: The nature of written comments on internal medicine residents’ evaluation forms. Acad Med. 2011;86(10 suppl):S30–S34

53. ten Cate O, Scheele F. Competency-based postgraduate training: Can we bridge the gap between theory and clinical practice? Acad Med. 2007;82:542–547

54. Jones MD Jr, Rosenberg AA, Gilhooly JT, Carraccio CL. Perspective: Competencies, outcomes, and controversy—Linking professional activities to competencies to improve resident education and practice. Acad Med. 2011;86:161–165

55. Carraccio C, Burke AE. Beyond competencies and milestones: Adding meaning through context. J Grad Med Educ. 2010;2:419–422


© 2013 by the Association of American Medical Colleges
