In 1999, the Accreditation Council for Graduate Medical Education (ACGME) and the American Board of Medical Specialties (ABMS) described six domains of clinical competency that were intended to organize the general dimensions of expertise of the modern physician. Over the ensuing years, the ABMS certifying boards framed expectations for initial certification (IC) and maintenance of certification (MOC) processes in the six competencies, and the ACGME required residency and fellowship programs to configure curricula and evaluation processes in the rubric of the six competencies under the umbrella of the Outcome Project.1 The implied goal of these tandem efforts was to establish desired outcomes of physician training and to maintain and enhance the level of physician performance over time in practice through IC and MOC. Subsequently, the Federation of State Medical Boards introduced the concept of maintenance of licensure with similar goals that encompass all physicians, including those who do not seek or achieve IC. In 2009, the ACGME, in tandem with the ABMS board, the college or academy, residency program directors, and residents in each specialty, began the work of defining “Milestones,” the core elements of the six competencies and the trajectory of expected growth in these domains.
Our approach to the assessment of learners and training programs has been evolving since before the core competencies were introduced, and it will continue to do so as our ability to gather data and understand best practices in physician training improves. The next step in this process will be the ACGME’s Next Accreditation System (NAS), which we describe below, but we recognize that there are already developments in the assessment of medical education that will influence our approach in the future. We consider some of these innovations and discuss how they may shape the next accreditation system after the Next Accreditation System.
Coincident with the efforts of the Outcome Project and an increased focus on physician performance over time, the ACGME has redefined the accreditation process, moving to a model of continuous oversight of key parameters, while systematically permitting deviation from existing detailed process standards for accredited programs in order to facilitate local innovation. Of course, the most important goal of these changes is to improve trainee outcomes in the six competencies. The NAS2 is the culmination of the Outcome Project. With redesigned data acquisition systems and creation of a national database of program characteristics, performance parameters, and resident Milestone achievement, individual specialty residency review committees will oversee programs based on key parameters, which include resident educational outcomes. Programs will maintain accreditation based on compliance with key process elements and actual resident performance, rather than based solely on demonstration of compliance with process-related standards.
A pivotal dimension of the NAS is the Clinical Learning Environment Review3 (CLER). This innovative, nonaccreditation site visit program has its origins in the work of the 2008–2010 Resident Duty Hour Task Force. The task force set forth three expectations for the ACGME.4 First, it is foundational that sponsoring institutions assure the safety and quality of care delivered to patients by residents in the learning environment today. Second, the ACGME must expect that sponsoring institutions assure the safety and quality of care rendered by residency program graduates in their future practice. Finally, the ACGME must expect that sponsoring institutions assure that residents are educated in a fashion that is humane, and in which the values of professionalism and effacement of self-interest in favor of the needs of patients are modeled by the faculty and staff and demonstrated by the residents.
To ensure that we met these expectations with appropriate expertise, the ACGME empaneled a national advisory task force to provide a framework for the CLER visit program. Recognizing that numerous accreditors, as well as local, state, regional, and federal evaluation programs, already scrutinize the quality and safety of care rendered in hospitals, the task force recommended a nonaccreditation visit program to serve as an external assessment tool for sponsors of ACGME programs. Visits focus on six areas of concern: (1) the engagement and demonstration of meaningful participation of residents in the patient safety programs of the institution; (2) the engagement and demonstration of meaningful participation of residents in the institutional quality of care activities and participation in programs related to reduction of disparities in clinical care conducted by the institution; (3) the establishment and oversight of institutional supervision policies; (4) the effectiveness of institutional oversight of transitions of care; (5) the effectiveness of duty hours and fatigue mitigation policies; and (6) activities addressing the professionalism of the educational environment. The CLER visits provide each institution with an evaluation of its level of performance in each of these six areas and are designed to demonstrate trends over time. The goal of this program is to demonstrate continued growth in quality and safety of care rendered in U.S. teaching hospitals and the engagement and education of residents in these dimensions of physician practice in an environment that demonstrates and inculcates professionalism.
The CLER visit program is overseen by an evaluation committee (EC) convened by the ACGME, composed of individuals with expertise in patient safety, quality of care, systems engineering, and education, who come from within and outside of medicine. The EC oversees the CLER visit evaluation processes and advises both the ACGME administration and the ACGME board on matters related to the effectiveness and impact of the program. The CLER program plans to publish summary information on the state of U.S. teaching hospitals in the six dimensions after each cycle of review.
The Accreditation System After the “Next Accreditation System”
The NAS became possible after new understanding emerged from the Outcome Project. Over the next decade, if these modifications and subsequent enhancements in the ACGME’s accreditation model incorporate additional data elements that prove to be beneficial in the accreditation process, the ACGME will likely ask once again whether our model of accreditation requires retooling, or even complete reconstruction. The timing and magnitude of revision of the NAS may depend on developments in two areas. First, there is emerging evidence that the quality of care delivered in a training program can affect the quality of care that the program’s graduates deliver later in practice. If the findings outlined by Asch and colleagues5 in this issue can be expanded beyond the narrow dimensions of a single entity of clinical care and can clearly demonstrate a relationship between the quality of care rendered in the teaching program and the immediate, intermediate, and long-term care rendered by graduates of residency programs, accreditation must incorporate scrutiny of those parameters into its processes, and levels of clinical performance into its standards. Indeed, the ACGME is proceeding from a perspective that the quality of care rendered in the teaching environment has an important influence over the clinical care effectiveness of our graduates.
Further, the work of Ericsson and colleagues6 on the development of expertise suggests that the growth of the trainee would be limited by the level of expertise of the mentor and, by extension, the quality of care provided by the mentor in the clinical context (i.e., resource constraint). To mitigate resource constraint, the ACGME requires that program sponsors be accredited by national health care accreditation bodies and that program faculty meet specific academic qualifications. The data from Asch and colleagues5 suggest that there is a range of quality within that accredited group of sponsors that affects the incidence of complications in a well-described, frequently performed series of procedures by as much as 33%. Furthermore, they suggest that despite continued improvement in practice, a measurable difference in performance persists for as long as 15 years after graduation. Clearly, other constraints identified by Ericsson et al., such as motivational and effort constraints, require other parameters of evaluation. Although the associations reported in this series of work seem evident, and the causal pathways have yet to be teased out, these data should give all educators cause to evaluate the quality of care rendered in their teaching settings.
The second important area will be the impact of the CLER program on the quality and safety of care rendered in the teaching environment. It is our hypothesis that provision of external, expert-based evaluation of the six focus areas of the CLER program will result in active engagement of the graduate medical education community in the ongoing safety and quality efforts within each teaching institution. We believe that through this process of enhanced engagement, we will see substantial improvement in safety and quality systems, proficiency of graduates in patient safety and quality, and improvements in patient outcomes.
The confluence of these efforts—researching the impact of the quality of care rendered in the teaching environment on the quality of care rendered by graduates in practice, and the effectiveness of educational program sponsors in responding to the CLER program evaluations—will cause the ACGME to ask and have sufficient information to answer the following questions at some point in the future:
1. What are the educational characteristics of programs that produce trainees who provide quality patient care after graduation? Is the link between the quality of patient care rendered in the teaching environment and the quality of care graduates later provide an association, or is it causal?
2. What are the parameters of quality and safety of patient care that the ACGME should use to judge programs and sponsoring institutions?
3. What is the threshold level of quality and safety of patient care that a program and sponsoring institution must consistently demonstrate to be permitted to (continue to) educate residents and fellows?
4. What will be the impact of competency-based (performance-based) versus time-based training on the parameters used to judge programs and sponsors? Will we have educational outcome parameters that correlate with and predict practice performance after graduation? Will competency- and experience-based graduation criteria supplant time-based training and reduce the degree of variability in graduate performance described by Asch and colleagues?5
The accreditation system after the Next Accreditation System is likely to continue to require compliance with certain structural and resource-based standards. It will, however, inexorably progress from the model that NAS is replacing, which is a model that requires demonstration of excellence in research, excellence in faculty credentials, excellence in facilities, excellence in academic parameters, and compliance with a litany of process-related standards. NAS is the first step toward a model that requires excellence in clinical care, excellence in mentorship, and an environment that demonstrates the ability to produce graduates who seek to continuously improve patient care and outcomes, and that accomplishes the above in an environment that nurtures and inculcates professionalism in its faculty, staff, and graduates. The ACGME will evolve using evidence garnered from research, such as that of Asch and colleagues and others in the educational and clinical care arenas, that demonstrates educational program impact on quality and safety of patient care, not only in the immediate teaching setting but also in clinical practice after graduation. This evolution will require research conducted by the ACGME with its community of collaborating investigators, including the ABMS member boards, the colleges and academies, the program directors, and the residents, who will critically review Milestones and other parameters used to evaluate and advance the educational outcomes effort. It will also require collaboration with health services researchers, those engaged in safety and quality research, and the institutional sponsors of our residency programs. We have clearly entered the Era of Outcomes, the beginning of the final phase of the Outcome Project. The graduate medical education community, through the NAS, is making progress on the measurement of educational outcomes in our residents and fellows.
The educational community must now forge that essential linkage between improved performance of the clinical educational environment and its educational outcomes, and the clinical practice outcomes of our graduates over their career in practice. Our graduates and their patients deserve no less than our best efforts.