Just over 100 years ago, Abraham Flexner's1 seminal report, Medical Education in the United States and Canada, sparked widespread reform, and now medical education is once again experiencing significant pressure to transform. Multiple reports from many of medicine's specialty groups and external stakeholders highlight the inadequacies of current training models to prepare a physician workforce to meet the needs of an increasingly diverse and aging population across the globe.2–9 Educators and regulatory bodies are responding to these calls for transformation by focusing on competency-based medical education (CBME), an amalgam of educational theories and approaches that emphasize the outcomes of training.10–12 CBME was recently defined by a group of international collaborators as
an outcomes-based approach to the design, implementation, assessment, and evaluation of a medical education program using an organizing framework of competencies. In CBME, the unit of progression is mastery of specific knowledge, skills, and attitudes and is learner-centered.13
One of the first competency-based frameworks to be introduced was CanMEDS in the mid-1990s.14 The Accreditation Council for Graduate Medical Education followed with the development and introduction of the general competencies framework for residency and fellowship in 2001.15 More recently, the Association of American Medical Colleges has strengthened its emphasis on competencies and outcomes for medical students,16 and the United States Medical Licensing Examination will increasingly emphasize physician competencies.17 Other countries, looking to improve the quality of training and potentially reduce costs, are also working to implement CBME.18
Although there is widespread agreement about the need for competencies that go beyond more traditional competencies, such as clinical skills and knowledge, some have expressed skepticism about the ability of training programs to reliably and validly perform the comprehensive assessments required by a CBME approach.19,20 For example, limited assessment methods and tools currently exist for teamwork and care coordination, key subcompetencies of systems-based practice. CBME, because it is driven by complex situational and context-dependent outcomes, requires robust assessment and evaluation processes to determine whether a trainee is truly prepared to enter the next stage of his or her career. As a result, since the inception of CBME, medical educators have been seeking the holy grail of evaluation tools. Methods such as secure examinations, standardized patients, and procedural simulations have contributed substantially to reliable and valid trainee assessment. For example, higher performance on secure examinations is modestly associated with better clinical performance in practice after completing a graduate medical education training program.21 Standardized patients have become an integral part of medical student education and assessment and are increasingly used in residency programs to judge capability in a controlled setting across a multitude of clinical skills.21–23
However, these methods and tools cannot replace the importance of faculty who are enabled to critically observe, question, and judge trainee performance in actual patient care situations.24 Ensuring that a trainee's capability or competence, as measured by exams and standardized patients, translates, or “transfers,” into actual work-based performance with patients and families is an essential faculty responsibility.25 Because of its emphasis on developmental trajectories, CBME requires more frequent, timely, formative, and authentic assessment and less dependence on “proxy,” summative assessments.10
This perspective is supported by evidence from work on the development of expertise and on the perils of isolated self-assessment. For example, exclusively using standardized patients to judge whether a trainee was acquiring competence in clinical skills would not only be expensive but, more important, would not provide the learner with regular and ongoing feedback; direct observation of trainees with timely feedback by faculty is essential. The journey to expertise also requires continuous practice under the critical eyes and ears of faculty who must accurately assess, with frequent and timely feedback, how trainees are progressing.26,27 Furthermore, a substantial body of literature clearly demonstrates that most physicians cannot determine their own strengths and weaknesses without external data and feedback.28 Effective assessment by faculty is a critical part of the equation in the transformation to CBME.
Faculty as Evaluators: Challenges and Opportunities
The fractured learning environment
At present, medical faculty work with trainees primarily in clinical units, referred to by some as microsystems, such as an ambulatory clinic or office-based setting, a hospital ward, a surgical suite, an intensive care unit, or other such sites.29 These clinical units are the context for work-based training and assessment. We are now beginning to understand how professional development and assessment are influenced by the functionality of the clinical units where students, residents, and fellows learn and care for patients.29 Research has identified that effective, successful microsystems are characterized in part by a strong focus on patients, interdependence of staff, staff development, and the generation of performance results. Embedded in these success characteristics is the need for a high level of professionalism, especially among physician leaders. Several recent reports demonstrate that internal medicine residency clinics scoring highly on a systems assessment tool and having electronic medical records are still not using basic quality improvement interventions or providing optimal care.30,31 It is hard to conceive that trainees can effectively acquire competency in clinical care, quality improvement, or systems-based practice if they practice in poorly functioning clinical microsystems.
In the inpatient setting, too many faculty are transients in the very clinical units where they teach and assess. For example, faculty in internal medicine and pediatrics often rotate on inpatient clinical services for just two to four weeks. This rotational structure is deeply ingrained within these specialty training cultures, yet we know little about how rotating through these microsystems affects the faculty's ability to accurately assess competence of their learners.32 In other fields, such as surgery or anesthesiology, residents often encounter pressure to maximize operational efficiency in the unit. Residents may face multiple operating room schedule changes and ultimately may anesthetize or operate on patients they did not originally evaluate for the procedure. These circumstances may or may not be known to the faculty responsible for assessing the learners. A recent study found that supervising faculty anesthesiologists had significantly different and variable conceptions compared with residents about when the residents should be allowed to perform six critical entrustable professional activities independently; acquaintance with the trainee was a key factor that affected this decision.33
This lack of continuity in both patient experience and time with faculty for trainees in the current medical education system makes longitudinal assessment and feedback very difficult. Hirsh and colleagues34 argued for the importance of continuity as an "organizing principle" for medical education, and a recent review on key attributes of effective supervision highlighted the importance of meaningful relationships.35 Compounding the fractured learning environment and lack of continuity is the substantial reluctance on the part of faculty to "feed forward" information to their colleagues about trainees over fear of "biasing" the receiving faculty.36,37 However, the end result is a perpetual cycle of "starting over" with assessment instead of using the shared information for the trainee's development and the creation of meaningful action plans. These cultural issues around supervision and feedback must be addressed by the educational community.37,38
System factors influence trainee performance, and faculty members need to account, and sometimes "adjust," for these system factors. Such adjustment might lead to rating errors, such as the halo effect and leniency error, because the faculty may feel the trainee was disadvantaged by a dysfunctional microsystem, especially if the microsystem "parasitizes" trainees, assigning them menial or undesirable tasks, often at the expense of educational experiences. Conversely, faculty may blame a trainee for an error when in fact the primary cause was a system problem. Teasing out the factors that lead to adverse events, for example, can be difficult unless systematic methods, such as root cause analysis, are used.39 Few faculty are trained in these skills.
Learning to work in interdisciplinary teams and understanding how the systems of the clinical unit function are also vital to the quality of patient care, teaching, and assessment. Unless interdisciplinary team care is the norm of a practice setting, it is hard to imagine how spending only two to four weeks supervising trainees is sufficient time for faculty to assess how well the trainees are interacting with the other essential health care providers on the unit. Working in interdisciplinary teams also calls for a more complex, contextually rich conception of professionalism. Hafferty and Levinson40,41 have explicitly called for the incorporation of complex adaptive system thinking when teaching and evaluating professionalism. To do this, faculty must understand the science of systems and how to work effectively in interdisciplinary teams, and they must move away from traditional views to a more relational view of autonomy. Relational autonomy recognizes that human agents are interconnected and interdependent, meaning that autonomy is socially constructed and must be granted by others.42,43
Furthermore, adhering to a systems approach assumes the faculty themselves have a good understanding of how the clinical unit functions and have the skill necessary to effectively assess the system and the essential roles of other health care providers on a team. Combining faculty who have insufficient system understanding with dysfunctional clinical units can only exacerbate the problem of flawed assessment and contribute to the potentially deleterious effects of the hidden curriculum.42 Future faculty development will need to incorporate training about how system factors affect the quality of both teaching and patient care, and also how faculty must be prepared to assess their trainees' competencies in systems-based practice.
For several reasons, the outpatient setting holds potentially more promise than inpatient settings for longitudinal assessment and feedback for most specialties.44 First, many trainees in specialties such as internal medicine, family medicine, and pediatrics work with a stable group of faculty preceptors who can observe these trainees over time.24 Second, because trainees often have their own panel of patients, assessment methods such as a medical record audit can be combined with reflection guided by faculty.45 Finally, as so much of medicine has moved into the outpatient setting, it follows logically that more training and assessment should occur here as well.
Traditional assessment roles
For the foreseeable future, two traditional faculty roles in assessment will continue to be essential: (1) questioning to probe knowledge and clinical reasoning and (2) direct observation to judge the clinical skills of medical interviewing, physical examinations, counseling, and other communication skills as well as procedural skills. Questions are crucial for helping trainees to learn the core skill of clinical reasoning. Unfortunately, faculty often fail to explore the logic and rationale behind trainee decisions.46 Faculty need to develop the skills to ask questions that emphasize the reasoning process and incorporate key findings and lessons from a growing body of evidence from research on cognition.46,47 Practical approaches exist to help faculty acquire these skills.46,48 These questioning skills apply equally well to the evaluation of procedural skills.
Although faculty need to be critical and accurate observers of trainee performance, the limited published research available demonstrates that faculty frequently fail to identify deficiencies in trainees' clinical skills.24,49–51 Ironically, despite the central role of faculty in teaching and assessment, only one study to date has demonstrated any efficacy of faculty development in improving the quality of faculty ratings of trainees based on direct observation.52 Part of the reason for this state of affairs is medical education's overemphasis on finding the "perfect" evaluation tool instead of focusing on the more important issue—the faculty who use the tool.53,54 To be sure, faculty should only use tools that have been evaluated for basic psychometric and quality properties, and a recent systematic review identified a small group of observation tools that meet minimal quality criteria for use.55 However, given that the redesign of evaluation forms explains only up to 10% of the variance in ratings,56 medical educators must now shift their attention to developing more effective methods to train faculty in observation and assessment.
In addition, we must help faculty and programs move away from rating scales based on just numbers, as CBME will require a greater reliance on descriptive or "qualitative" assessment.57 Early work suggests that using qualitative research methods to judge medical student portfolios can be as reliable as using quantitative methods.58 Faculty need to recognize that numeric ratings are nothing more than a process to synthesize and then represent a composite judgment about a trainee. Ultimately, evaluation tools are only as good as the individuals using them; perhaps it is time for the medical education profession to require all faculty involved in training students and residents to learn a core set of competencies in assessment, and for all training programs to provide ongoing professional development in assessment.59
Along those lines, recent work by Albanese and colleagues60 provides a useful framework for how the educational community and institutions might structure faculty development activities using an integrated systems model (ISM). They lay out 14 implications of the ISM for continuing medical education. With minor adjustments, some of these can be applied equally to faculty development, for example:
* Changes in assessment and supervision that are also mission critical for the institution and help to build system “reserve” will be more likely implemented.
* The further a faculty member moves along the stages of change, the higher the likelihood of adoption; such individuals are also more likely to become champions for the change.
* Enlisting the assistance of respected educational faculty to help implement the change helps to promote broader and more rapid uptake by other faculty.
* Helping faculty mentally picture how the change in the educational program will affect and improve their own educational practices will also assist in the adoption of new knowledge and skills.
These and other factors provided in the article can serve as a useful guide to educators planning faculty development activities.61
Assessment by faculty must be grounded in the principles of CBME
CBME requires assessment be criterion based and developmental. Defining the criteria in developmental terms, commonly called milestones or benchmarks, allows faculty and program directors to determine whether the trainee is on an appropriate “trajectory.”62 Evolving toward such a developmental, criterion-based standard will require training to help faculty acquire shared mental models and understanding of what competence should look like at various developmental stages. Milestones, in effect, can become the blueprint for curriculum and assessment.62
Multiple studies highlight that one of our biggest and most refractory problems in assessment is the lack of agreement among faculty about what constitutes satisfactory performance across competencies regardless of the competency framework.20,54 This lack of agreement among faculty is a major threat to the reliability and validity of decisions about trainee competence.54,56 In addition, it places an unfair burden on trainees to make sense of the disparate ratings and feedback they receive from faculty. Too often, the assessment process can feel to the trainee like playing the lottery—“Who will I get today and what will they say?” Because effective assessment is not an innate skill but, rather, requires training and practice, programs must provide ongoing feedback to faculty regarding their evaluation skills. Ideally, this feedback would provide comparisons with the skills of their peers within the program, and ultimately it would also provide comparisons with national benchmarks.63 Programs also must develop longitudinal assessment systems to counter the pernicious effects of the current fractured learning environment highlighted previously. Ultimately, faculty must become less fearful of providing meaningful performance data—including strengths and developmental needs—about the trainee during educational handoffs.36,37 This is especially important in our current rotational model of training—without “forward feeding” of information, trainees may end up in a perpetual cycle of superficial, nonspecific assessment and feedback.
The good news is that a number of organizations are aggressively supporting a national effort to define milestones across all the disciplines in medicine, and likewise a consortium of organizations has defined core competencies in geriatrics for medical students and residents.7 The next crucial step will be to implement and apply the milestones in training programs, a process that will require a substantial effort in faculty development using techniques such as performance dimension training and frame-of-reference training.54,64 These approaches have been shown in other fields to improve the quality of performance appraisals.65 More important, frame-of-reference training has been successfully used as part of an internal medicine student clerkship system for many years at the Uniformed Services University of the Health Sciences and now nationally.66,67
Assessment requires competent faculty
Clinical competence of faculty is a crucial component of effective assessment, yet this issue has received little attention to date. Programs operate on the assumption that faculty possess sufficient, if not high, levels of knowledge, skills, and attitudes in the competencies they are responsible for teaching and assessing. We have known for some time that substantial numbers of students and residents graduate with significant deficiencies in clinical skills,24 so it might not be surprising that those who later become faculty may possess important deficiencies in clinical skills. A growing body of literature supports this concern. For example, a study of cardiac auscultation skills found that faculty were no more skilled than third-year medical students.68 Another study highlighted substantial deficiencies in informed decision-making skills among family medicine physicians, internists, and surgeons,69 and a recent study found that, compared with residency clinics, practicing physicians provided only marginally better care to older patients in a number of areas.30
The implication of these findings is that CBME-focused faculty development will need to incorporate clinical skills training with training in assessment. In addition to improving the clinical skills of faculty, faculty development will also need to incorporate training in the “new” competencies crucial to 21st-century practice: evidence-based practice using point-of-care clinical decision support and information; health information technology; teamwork; care coordination; systems functionalities; advocacy; and context-aware professionalism, to name a few. The majority of faculty working today never received formal training in any of these competencies.29 In effect, there are a number of new competencies that faculty will need to learn as their trainees learn them, necessitating more collaborative models of faculty training. The Residency Review Committee for internal medicine recently added a requirement for core faculty to be the “expert competency evaluators … to assist in developing and implementing the evaluation system.”70
This is not to say that a single faculty member need be an expert in all competencies; rather, trainees should be taught and evaluated by those individuals who truly possess the highest level of knowledge and skill in the domain of interest, and those individuals may not be physicians. Furthermore, some individuals may be excellent judges of competence, yet they may not necessarily be experts in the field. One excellent example in medicine is standardized patients, who can be trained to judge performance effectively in key clinical skills.22
Faculty as coach and mentor in assessment
Ultimately, the majority of trainees will graduate from their programs and enter unsupervised practice. From that point forward, trainees can no longer rely on structured approaches to assessment from others; they will need to develop their own systems of self-directed assessment to continue their professional development and, at a minimum, remain competent. Faculty must prepare trainees for this important inevitability. Portfolios are a potentially powerful tool for engaging trainees in their own assessment.71 Building a portfolio is an active process that requires contributions from the trainee, and self-assessments like medical record audits can be performed directly by the trainee.45 Lack of engagement by individual trainees in their own assessment will substantially undermine a widespread transformation to CBME but, more important, will inadequately prepare trainees for a practice environment looking to measure physician performance continuously. One clear implication is the need for trainees to fully understand the value and impact of the assessment methods and tools being used by their training programs.
Next Steps: Preparing Faculty for the CBME Era
There is a growing consensus that the rate-limiting step in the evolution to CBME is faculty development.72 As we have highlighted in this article, faculty will need substantial help in improving both their core competencies as well as new ones in teaching and in assessment. Most learning still occurs through the care of actual patients in a variety of clinical settings, and although we will need to increasingly embrace simulation and other assessment technologies in the future, faculty will remain central to the education process. If we are to transform medical education for the good of the public, faculty must also fully embrace their role as evaluators. The role of faculty as expert “coaches” must encompass teaching, assessment, and feedback.
Significant challenges and barriers to this evolution do exist. First, the available time for faculty to learn and practice new skills has been shrinking as pressures for productivity in clinical care and/or research have grown substantially. This is frustrating not only for faculty but also increasingly for policy makers who believe that taxpayers are not getting a meaningful return on their more than $15 billion investment in graduate medical education.73,74 Furthermore, ethical standards of our profession would direct us to ensure that our students possess sufficient knowledge, skills, and attitudes for successful matriculation into residency. The bottom line is that institutions must provide the resources necessary to ensure at least a competent educational workforce. It is no longer acceptable to perform education as a “one-off” activity that is inadequately supported.72,73
We have yet to develop the most effective faculty development models. The good news from a recent systematic review is that the faculty who participate in educational training activities report (1) high levels of satisfaction, (2) positive changes in their attitudes, (3) increased understanding of educational principles and teaching skills, (4) changes in behavior as noted by their students, and (5) greater involvement in teaching.75 This review also noted that success factors for faculty development include incorporation of feedback in the training, active learning, effective relationships with peers and colleagues, and use of diverse teaching approaches. However, few studies have investigated whether faculty training translates into actual behavior changes among trainees. In addition, most faculty development is designed as a one-time "bolus" activity and less often as a longitudinally designed program.
We will not address the current shortcomings of both undergraduate and graduate medical education faculty development using single-institution-based programs and one-time workshops. A national faculty development effort in assessment and CBME using new models of longitudinal, experiential training is needed. In Table 1 we provide a summary of what we believe are the critical next steps.
There is a need now to create regional centers to develop a national cadre of trainers, a sort of "SWAT team," who can provide longitudinal training and on-site coaching. These centers could function using existing resources, such as simulation labs at medical schools, to address the key items listed and to create networks of expertise that extend well beyond an individual school's or program's boundaries. Financial resources could come from the redirection of a portion of current federal graduate medical education dollars, the Health Resources and Services Administration, and the pooling of local institutional resources.73 By creating regional centers, economies of scale can be realized, with the added benefit of faculty from multiple programs interacting to create a shared understanding of the competencies and milestones, reducing the unwarranted variation in assessment currently seen across the country.
We should not wait for research to find the perfect faculty development models before embarking on this initiative. Instead, we must build in ongoing research and learning as part of the process, using new methodological strategies to evaluate the effectiveness of faculty development as part of a continuous quality improvement process.76 We know enough about general principles and educational theory to build and implement faculty development in assessment to move CBME forward and improve training for the benefit of the public. The public, patients, and our trainees need the medical education enterprise to make this transition now.
This work was supported by a writing conference funded by the Medallion Fund and the Josiah Macy, Jr. Foundation. The conference was entitled “A 2020 Vision of Faculty Development Across the Medical Education Continuum” and was held at Baylor College of Medicine on February 26–28, 2010.
Dr. Holmboe co-leads a faculty development course in assessment conducted at the American Board of Internal Medicine; he receives no additional compensation for the course. He receives royalties from Mosby-Elsevier for a textbook on assessment. Finally, he received an honorarium from Baylor College of Medicine for a presentation at a symposium related to this manuscript.
This information was presented in part at the conference mentioned above.
1 Flexner A. Medical Education in the United States and Canada. A Report to the Carnegie Foundation for the Advancement of Teaching. Bulletin No. 4. Boston, Mass: Updyke; 1910.
2 Hoover EL. A century after Flexner: The need for reform in medical education from college and medical school through residency training. J Natl Med Assoc. 2005;97:1232–1239.
4 Holmboe ES, Bowen JL, Green ML, et al. Reforming internal medicine residency training. J Gen Intern Med. 2005;20:1165–1172.
5 Schroeder SA, Sox HC. Internal medicine training: Putt or get off the green. Ann Intern Med. 2006;144:938–939.
6 Institute of Medicine. Resident Duty Hours: Enhancing Sleep, Supervision, and Safety. Washington, DC: National Academy Press; 2009.
9 Simpson JG, Furnace J, Crosby J, et al. The Scottish doctor—Learning outcomes for the medical undergraduate in Scotland: A foundation for competent and reflective practitioners. Med Teach. 2002;24:136–143.
11 Smith SR, Dollase R. AMEE Guide No. 14: Outcomes-based education: Part 2—Planning, implementing and evaluating a competency-based curriculum. Med Teach. 1999;21:15–22.
12 Hodge S. The origins of competency-based training. Aust J Adult Learn. 2007;47:179–209.
13 Frank JR, Mungroo R, Ahmad Y, Wang M, De Rossi S, Horsley T. Toward a definition of competency-based education in medicine: A systematic review of published definitions. Med Teach. 2010;32:631–637.
14 Frank JR, Danoff D. The CanMEDS initiative: Implementing an outcomes-based framework of physician competencies. Med Teach. 2007;29:642–647.
15 Batalden P, Leach D, Swing S, Dreyfus H, Dreyfus S. General competencies and accreditation in graduate medical education. Health Aff (Millwood). 2002;21:103–111.
22 Hawkins RE, Boulet JR. Direct observation: Standardized patients. In: Holmboe ES, Hawkins RE, eds. Practical Guide to the Evaluation of Clinical Competence. Philadelphia, Pa: Mosby-Elsevier; 2008:102–118.
23 Cleland JA, Abe K, Rethans JJ. The use of simulated patients in medical education: AMEE Guide No. 42. Med Teach. 2009;31:477–486.
26 Ericsson KA. An expert-performance perspective of research on medical expertise: The study of clinical performance. Med Educ. 2007;41:1124–1130.
27 Ericsson KA. The influence of expertise and deliberate practice on the development of superior expert performance. In: Ericsson KA, Charness N, Feltovich P, Hoffman RR, eds. Cambridge Handbook of Expertise and Expert Performance. Cambridge, UK: Cambridge University Press; 2006:685–706.
28 Eva KW, Regehr G. “I'll never play professional football” and other fallacies of self-assessment. J Contin Educ Health Prof. 2008;28:14–19.
29 Nelson EC, Batalden PB, Godfrey MM. Quality by Design: A Clinical Microsystems Approach. San Francisco, Calif: Jossey-Bass; 2007.
32 Holmboe E, Ginsburg S, Bernabeo E. The rotational approach to medical education: Time to confront our assumptions? Med Educ. 2011;45:69–80.
34 Hirsh DA, Ogur B, Thibault GE, Cox M. “Continuity” as an organizing principle for clinical educational reform. N Engl J Med. 2007;356:858–866.
35 Kilminster S, Cottrell D, Grant J, Jolly B. AMEE Guide No. 27: Effective educational and clinical supervision. Med Teach. 2007;29:2–19.
39 Graber ML, Franklin N, Gordon R. Diagnostic error in internal medicine. Arch Intern Med. 2005;165:1493–1499.
40 Hafferty FW, Levinson D. Moving beyond nostalgia and motives: Towards a complexity science view of medical professionalism. Perspect Biol Med. 2008;51:599–615.
42 MacDonald C. Nurse autonomy as relational. Nurs Ethics. 2002;9:194–201.
43 Sherwin S. A relational approach to autonomy in health care. In: Sherwin S, ed. The Politics of Women's Health: Exploring Agency and Autonomy. Philadelphia, Pa: Temple University Press; 1998:19–47.
44 Bowen JL, Salerno SM, Chamberlain JK, Eckstrom E, Chen HL, Brandenburg S. Changing habits of practice. Transforming internal medicine residency education in ambulatory settings. J Gen Intern Med. 2005;20:1181–1187.
46 Bowen JL. Educational strategies to promote clinical diagnostic reasoning. N Engl J Med. 2006;355:2217–2224.
47 Gruppen LD, Frohna AZ. Clinical reasoning. In: Norman GR, van der Vleuten CPM, Newble DI, eds. International Handbook of Research in Medical Education. Dordrecht, Netherlands: Kluwer Academic; 2002:205–230.
49 Herbers JE, Noel GL, Cooper GS. How accurate are faculty evaluations of clinical competence? J Gen Intern Med. 1989;4:202–208.
50 Kalet A, Earp JA, Kowlowitz V. How well do faculty evaluate the interviewing skills of medical students? J Gen Intern Med. 1992;7:179–184.
51 Noel GL, Herbers JE Jr, Caplow MP, Cooper GS, Pangaro LN, Harvey J. How well do internal medicine faculty members evaluate the clinical skills of residents? Ann Intern Med. 1992;117:757–765.
52 Holmboe ES, Hawkins RE, Huot SJ. Direct observation of competence training: A randomized controlled trial. Ann Intern Med. 2004;140:874–881.
53 Landy FJ, Farr JL. Performance rating. Psychol Bull. 1980;87:72–107.
54 Holmboe ES. Direct observation by faculty. In: Holmboe ES, Hawkins RE, eds. Practical Guide to the Evaluation of Clinical Competence. Philadelphia, Pa: Mosby-Elsevier; 2008:110–129.
55 Kogan JR, Holmboe ES, Hauer KE. Tools for direct observation and assessment of clinical skills of medical trainees: A systematic review. JAMA. 2009;302:1316–1326.
56 Williams RG, Klamen DA, McGaghie WC. Cognitive, social and environmental sources of bias in clinical performance settings. Teach Learn Med. 2003;15:270–292.
57 Govaerts MJ, van der Vleuten CP, Schuwirth LW, Muijtjens AM. Broadening perspectives on clinical performance assessment: Rethinking the nature of in-training assessment. Adv Health Sci Educ Theory Pract. 2007;12:239–260.
58 Driessen E, van der Vleuten C, Schuwirth L, van Tartwijk J, Vermunt J. The use of qualitative research criteria for portfolio assessment as an alternative to reliability evaluation: A case study. Med Educ. 2005;39:214–220.
62 Green ML, Aagaard EM, Caverzagie KJ, et al. Charting the road to competence: Developmental milestones for internal medicine residency training. J Grad Med Educ. 2009;1:5–20.
63 Swing SR, Clyman SG, Holmboe E, Williams RG. Advancing resident assessment in graduate medical education. J Grad Med Educ. 2009;1:278–286.
64 Goodstone MS, Lopez FE. The frame of reference approach as a solution to an assessment center dilemma. Consult Psychol J Pract Res. 2001;53:96–107.
65 Hauenstein NMA. Training raters to increase accuracy of appraisals and the usefulness of feedback. In: Smither JW, ed. Performance Appraisal. San Francisco, Calif: Jossey-Bass; 1998:404–442.
67 Hemmer PA, Papp KK, Mechaber AJ, Durning SJ. Evaluation, grading, and use of the RIME vocabulary on internal medicine clerkships: Results of a national survey and comparison to other clinical clerkships. Teach Learn Med. 2008;20:118–126.
68 Vukanovic-Criley JM, Criley S, Warde CM, et al. Competency in cardiac examination skills in medical students, trainees, physicians and faculty: a multicenter trial. Arch Intern Med. 2006;166:610–616.
69 Braddock CH 3rd, Edwards KA, Hasenberg NM, Laidley TL, Levinson W. Informed decision making in outpatient practice: Time to get back to basics. JAMA. 1999;282:2313–2320.
71 Holmboe ES, Davis MH, Carraccio C. Portfolios. In: Holmboe ES, Hawkins RE, eds. Practical Guide to the Evaluation of Clinical Competence. Philadelphia, Pa: Mosby-Elsevier; 2008:86–101.
73 Medicare Payment Advisory Commission. Medical education in the United States: Supporting long-term delivery system reforms. In: Report to the Congress: Improving Incentives in the Medicare Program. Washington, DC: Medicare Payment Advisory Commission; June 2009:3–39. http://www.medpac.gov/documents/Jun09_EntireReport.pdf. Accessed December 13, 2010.
74 Iglehart JK. Medicare, graduate medical education and new policy directions. N Engl J Med. 2008;359:643–650.
75 Steinert Y, Mann K, Centeno A, Dolmans D, Spencer J, Gelula M, Prideaux D. A systematic review of faculty development initiatives designed to improve teaching effectiveness in medical education: BEME Guide No. 8. Med Teach. 2006;28:497–526.
76 O'Sullivan PS, Irby DM. Reframing research on faculty development. Acad Med. 2011;86:421–428.