The recent health care quality improvement (QI) movement has called for significant changes to the way that health care is delivered and taught in academic medical centers (AMCs).1,2 This movement also has affected academic continuing medical education (CME). Bennett and colleagues3 called on CME professionals, specifically those in academic settings, to facilitate physicians’ learning systematically by developing competencies from their own practices to support performance-based CME. In 2005, the Accreditation Council for Continuing Medical Education (ACCME) released its Updated Criteria for Accreditation, which emphasized an expanded role for CME in physicians’ lifelong learning, self-assessment, and practice performance improvement.4 This focus correlated with the American Board of Medical Specialties’ requirements for maintenance of certification.5 Both organizations promoted the shift from an emphasis on the acquisition of medical knowledge to one on ongoing self- and performance assessment and practice improvement. Further, CME research over the past two decades has shown that lectures, a major component of traditional continuing education, are ineffective in changing physicians’ behavior.6 Instead, other formats, such as audit and feedback and reminders, which have been shown to be more effective, correspond to today’s performance improvement methods of clinical data collection and reporting.7,8
Academic CME providers have responded to these influences by refocusing their efforts on internal educational activities for clinical faculty and housestaff, including regularly scheduled series such as departmentally sponsored grand rounds and morbidity and mortality conferences.9 Additionally, a decline in commercial support for academic CME has led to more internal funding.10
Despite these changes, barriers remain to the alignment of CME and the clinical quality enterprises within AMCs. These barriers include the following: (1) the clinical faculty who are responsible for continuing education are often not the same people as those who are involved in QI efforts, (2) CME offices may be struggling with financial solvency, (3) CME staff may not have the skill set or exposure necessary for clinical quality data collection and analysis, and (4) data collection may not identify specific practice gaps or be available to the CME office for educational needs assessment and outcomes evaluation. Finally, there is a perceived gap between the goals of education and those of QI in AMCs, with CME traditionally focused on knowledge transfer while QI has emphasized improving systems-based processes.
An Initiative for Change: The Aligning and Educating for Quality Initiative
To address this gap and meet the need for the alignment of the CME and QI efforts at AMCs, the Association of American Medical Colleges (AAMC) launched a pilot initiative in January 2011 called Aligning and Educating for Quality (ae4Q). Our goal with this initiative was to assist selected AMCs as they moved to a more integrated model of continuous performance improvement by aligning their quality measurement and improvement with their continuing education endeavors. The ae4Q initiative aligned data sources (financial, quality measurement, referrals, utilization, clinical effectiveness research, and other data) with effective educational and QI programming. In this article, we describe the development of the ae4Q pilot initiative and the resulting outcomes that have led to ongoing project improvements.
Features of the ae4Q pilot initiative
Two of the authors (N.L.D. and D.A.D.) designed the ae4Q initiative. On the basis of our experience in lifelong learning and improvement, we used the concepts of performance measurement, data analysis for needs assessment, and educational development to promote changes in practice. Our model had its foundation in CME but relied on collaboration with the health system’s QI enterprise. These concepts supported the AAMC’s strategic impact goals to work with and for member institutions in the areas of medical education, clinical care delivery, and, specifically, building member capacity. The AAMC provided full-time staff, administrative resources, and MedEdPORTAL and other Web-based resources to support the ae4Q initiative.
We conducted a pilot from January 2011 through June 2012. First, we sent introductory information about the project to all AAMC member AMCs, including 137 medical schools and over 200 teaching hospital CME units. Fifteen sites volunteered to be considered, and we selected 12 on the basis of their readiness for change, the presence of local champions in CME and QI, the availability of resources for alignment, the identification of clinical improvement priorities, and a willingness to report their findings to the AAMC. Early on, 1 site dropped out because of staffing changes, leaving us with 11 sites for our pilot (see List 1). We executed a letter of understanding with each pilot site defining our expectations of each organization, confidentiality, limitations of data sharing, and scope of the project.
At each site, the ae4Q pilot included the following elements: (1) the key representatives’ (one or more per site) completion of a readiness assessment survey (see List 2); (2) a previsit telephone call with us to assess the goals and priorities of the site; (3) our on-site consultation visit; (4) consideration of recommendations for interventions for improvement; (5) our postvisit telephone debriefing with lead staff; (6) completion of a postintervention assessment survey (a repeat of the readiness assessment survey); (7) site-specific progress reports; (8) completion of a logic model11 demonstrating the participating organization’s resources required, short- and long-term outcomes, and barriers to change; and (9) a final summary of activity (see Figure 1).
During our site visits, stakeholders from across the organization participated in a number of activities. Although the AMC’s CME office was our entry point for the pilot, we expected the health system’s QI and other relevant leaders to be engaged and supportive of the initiative and the changes that we recommended. Site visit meetings included representatives from the CME office at the associate/assistant dean and/or director level as well as from the QI and graduate medical education leaders. Whenever possible, we met with the medical school dean and chief medical officer and/or chief executive officer of the teaching hospital system. Meeting agendas included a description of the initiative and its purpose as well as a discussion regarding each site’s proposed work. Our expectation was that this work would include efforts to align QI and CME with the goal of improving health care delivery.
In addition, we conducted workshops for the faculty and staff who develop grand rounds and morbidity and mortality conferences. We invited QI and CME professionals to participate to foster collaboration across these groups. Our goal for these workshops was to reframe such conferences to focus on organizational improvement priorities. Ideally, participants would use these new “quality-based rounds” to employ clinical performance data for needs assessments, identify and seek consensus on interventions for improvement, and assess postintervention outcomes data.
Next, to create a community of learners, we convened meetings with the ae4Q participants at national conferences. For example, a session at the 2011 AAMC annual meeting included presentations of preliminary results from two pilot sites along with a tutorial on the logic model as a method for outcomes evaluation.
To facilitate the integration of QI and effective educational interventions, we created a Web site (www.aamc.org/ae4q). On the basis of the pilot sites’ clinical improvement priorities, we created “content bundles” that included resources from the relevant literature and education and QI tools in specific clinical areas (e.g., venous thromboembolism [VTE] prophylaxis, hospital readmission, sepsis, surgical checklists, and others). Other faculty and staff development tools also are available on the site. Further, we provided guidance and consultation for the pilot sites to mentor individuals and groups within their academic health systems as they developed learning organization functions.12 We also collated, developed, and disseminated educational resources to the participating AMCs.
Evaluating the ae4Q pilot initiative
To evaluate our pilot, we used postintervention surveys, follow-up interviews with staff at the participating AMCs, activity update reports from each site, and reviews of the logic models submitted by the sites.
Preintervention readiness assessment survey results.
The results of the preintervention survey showed wide variation among the pilot sites’ organizational structures, including medical school/hospital relationships, representation of QI expertise on CME committees, and the presence of efforts to align health care delivery with community, system, or patient needs. We found less variation in educational methods and formats; most sites used mainly didactic curricula in conference settings, delivering primarily clinical content to physician audiences. Sites reported that clinical data were moderately available and used primarily by the QI enterprise as opposed to informing medical education programs. Although we found some variation in responses, most sites reported limited use of CME to advance QI and patient safety in their organizations. In contrast, most sites reported both moderate to very positive support from both the medical education and clinical leadership for aligning these efforts and a culture of continuous QI rather than simply meeting accreditation requirements. All sites reported ACCME accreditation. At the start of the pilot, five sites reported Accreditation with Commendation status (meeting additional criteria for performance-based CME). At the end of the pilot, eight did so. Most sites were in the planning phase for their reaccreditation self-studies and intended to describe their participation in the pilot in their reports.
Our follow-up telephone interviews and e-mail communications with at least one representative from each site elicited some common findings. First, all sites identified clinical priorities and linked them to health professions education with the goal of improved processes and outcomes. Most focused on national priorities and the performance requirements of the Joint Commission and health care payers (primarily the Centers for Medicare and Medicaid Services), which we described in our content bundles. Second, all sites were able to implement or augment the cross-representation of QI and CME staff on their respective committees. Third, to varying degrees, all sites improved their internal CME programming to include the use of quality data for needs assessments and outcomes analysis—for example, by implementing or strengthening internal quality-based rounds. Fourth, representatives from each site reported that the CME office gained visibility and value as a channel, change agent, facilitator, and provider of resources for improvement. Fifth, staff competencies were redefined to align with the mission of QI, including quality data collection and analysis for needs assessment and outcomes evaluation. Staff in clinical departments also recognized that the CME office had a role in facilitating the use of quality data for medical education. Finally, all sites described the value of a recognized national organization like the AAMC as a catalyst for change, with interviewees indicating that the pilot allowed them to advance the integration of QI and CME in ways that may not have been possible without an external intervention.
Examples of site-specific changes.
We found many positive site-specific changes as a result of the ae4Q pilot, including the following high-impact improvements:
- A new process for developing morbidity and mortality conferences that includes data and feedback from each of the health care professionals involved in a case and provides CME credit based on a QI approach.
- CME staff involvement on root cause analysis teams, leading to educational interventions supporting improvement.
- A template for institutional review board review of performance improvement studies.
- Cross-departmental training in quality and an interprofessional education program in QI.
- Use of baseline performance data combined with education through scheduled CME activities and additional self-directed learning activities to achieve measurable practice changes.
- Performance/QI faculty leads appointed in clinical departments (e.g., a vice chair of quality).
- Promotion of the use of patient satisfaction and patient safety data (available on the health system’s intranet) for grand rounds.
- System-wide Lean Six Sigma training for clinicians and educators.
- Focus on performance improvement CME at five affiliate hospitals with specific action plans for CME committees at each affiliate.
- A performance improvement CME newsletter to clinical faculty focused on the elements and value of performance-based CME.
Two pilot sites attributed measurable improved clinical outcomes to their participation in the ae4Q pilot. One reported a dramatic and sustained 54% decrease in VTE incidence as a result of a multimodal, interprofessional, system-wide educational and QI campaign.13 At another site, from October to December 2011, the use of blood products decreased significantly.14
Postintervention assessment survey results.
Of the 24 parameters in our pre/postintervention survey, we found noteworthy changes in the responses to 3. The greatest change was an increase in the reported use of interactive, interprofessional, innovative education methods, such as QI rounds, case discussions, and performance-based feedback, to replace didactic CME formats. We also found an increase in CME content based on performance improvement priorities and incorporating quality measures and tools. These new formats supplemented existing programs, which had tended to focus on clinical knowledge and to be designed for a physician-only audience. Further, we found an increase in reported collaboration between QI and CME, extending beyond the limited use of CME to advance QI priorities.
We also noted some changes in the structures of CME committees to include QI experts, the use of clinical data in educational planning, and the availability of clinical data to education planners.
Implications of the ae4Q Pilot Initiative
Although our pilot resulted in clear examples of the successful alignment of CME and QI, along with important clinical, educational, and organizational outcomes, it also has important limitations. First, it is a pilot, comprising volunteer organizations that were selected because of their potential for success. Such a small and selective sample makes generalizability of our findings difficult. Other organizations with fewer champions and resources may not have similar outcomes. Second, a question of causality exists—the degree to which the ae4Q initiative by itself contributed to the changes we observed is unclear. Third, activities, interventions, and reporting were led by the organizations’ CME units. Given that the CME units undertook the data collection and reporting, biases toward CME may have influenced their reporting and outcomes.
Despite these limitations, our results support several conclusions. First, “CME” is no longer an adequate description of the performance improvement efforts now demanded of academic CME offices. New skills for measuring practice gaps are needed for CME, in collaboration with clinical quality measurement teams, to move from promoting simple knowledge enhancement to focusing on broader improvements in the competency and performance of clinicians and systems. Second, academic medical systems that support the alignment of QI and CME may improve their organizational and educational processes, thus improving health care outcomes. This recognition of the need for alignment of QI and CME calls for performance-based funding models to support systematic educational approaches. Further, as AMCs form integrated systems that include regional providers, aligned CME structures may serve as the nexus of clinical improvement for entire communities, with health information technology serving an integral role in the connection and communication between organizations, the improvement of processes, and the measurement of outcomes.
Finally, we believe that we have developed a robust process that links CME elements to other AMC components. Our pilot demonstrated that aligning clinical quality measurement with CME may contribute to improved clinical and educational outcomes. Academic CME offices need access to clinical data if they are to identify organizational and individual practice gaps, develop educational interventions for improvement, and evaluate outcomes. In turn, AMC QI enterprises need to employ effective educational interventions to achieve robust clinical improvement outcomes.
Although many barriers remain, an on-site, consultative, supportive initiative led by a national organization can serve to align CME and QI efforts as our ae4Q initiative did. This alignment can contribute to meaningful performance improvement and better clinical outcomes. The AAMC has used these findings to create resources and ongoing services to support AMCs as they pursue efforts to align QI and CME.
1. Leape LL, Berwick DM. Five years after To Err Is Human: What have we learned? JAMA. 2005;293:2384–2390.
2. Batalden P, Davidoff F. Teaching quality improvement: The devil is in the details. JAMA. 2007;298:1059–1061.
3. Bennett NL, Davis DA, Easterling WE Jr, et al. Continuing medical education: A new vision of the professional development of physicians. Acad Med. 2000;75:1167–1172.
4. Regnier K, Kopelow M, Lane D, Alden E. Accreditation for learning and change: Quality and improvement as the outcome. J Contin Educ Health Prof. 2005;25:174–182.
5. American Board of Medical Specialties. About ABMS maintenance of certification. http://www.abms.org/Maintenance_of_Certification/. Accessed June 6, 2013.
6. Davis DA, O’Brien MA. Continuing medical education as a means of lifelong learning. In: Silagy C, Haines A, eds. Evidence-Based Practice in Primary Care. 2nd ed. London, UK: BMJ Books; 2001:142–156.
7. Thomson O’Brien MA, Freemantle N, Oxman AD, Wolf F, Davis DA, Herrin J. Continuing education meetings and workshops: Effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2001;2:CD003030.
8. Marinopoulos SS, Dorman T, Ratanawongsa N, et al. Effectiveness of Continuing Medical Education. Rockville, Md: Agency for Healthcare Research and Quality; 2007. AHRQ publication no. 07-E006.
9. Association of American Medical Colleges and Society of Academic Continuing Medical Education. CME and Its Evolution in the Academic Medical Center: The 2011 AAMC/SACME Harrison Survey. Washington, DC: Association of American Medical Colleges; 2011. https://members.aamc.org/eweb/upload/CME%20and%20Its%20Evolution%20in%20the%20Academic%20Medical%20Center%20The%202011AAMCSACME%20Harrison%20Survey.pdf. Accessed June 6, 2013.
10. Accreditation Council for Continuing Medical Education. 2011 annual report data. 2012. http://www.accme.org/education-and-support/video/commentary/2011-annual-report-data. Accessed June 6, 2013.
11. W.K. Kellogg Foundation. Logic Model Development Guide: Using Logic Models to Bring Together Planning, Evaluation, and Action. Battle Creek, Mich: W.K. Kellogg Foundation; 2001.
12. Anderson RA, McDaniel RR Jr. Managing health care organizations: Where professionalism meets complexity science. Health Care Manage Rev. 2000;25:83–92.
13. Pingleton SK, Carlton E, Moncure M, et al. Reduction of venous thromboembolism (VTE) in hospitalized patients: A multidisciplinary, interprofessional approach aligning education with quality improvement. AAMC iCollaborative. May 2, 2012. https://www.mededportal.org/icollaborative/resource/174. Accessed June 6, 2013.
14. LeBlond RF. Chief quality officer, University of Iowa Hospitals and Clinics. Personal communication with N. Davis, January 2012.