During our site visits, stakeholders from across the organization participated in a number of activities. Although the AMC’s CME office was our entry point for the pilot, we expected the health system’s QI and other relevant leaders to be engaged and supportive of the initiative and the changes that we recommended. Site visit meetings included representatives from the CME office at the associate/assistant dean and/or director level as well as from the QI and graduate medical education leaders. Whenever possible, we met with the medical school dean and chief medical officer and/or chief executive officer of the teaching hospital system. Meeting agendas included a description of the initiative and its purpose as well as a discussion regarding each site’s proposed work. Our expectation was that this work would include efforts to align QI and CME with the goal of improving health care delivery.
In addition, we conducted workshops for the faculty and staff who develop grand rounds and morbidity and mortality conferences. We invited QI and CME professionals to participate to foster collaboration across these groups. Our goal for these workshops was to reframe such conferences to focus on organizational improvement priorities. Ideally, participants would use these new “quality-based rounds” to employ clinical performance data for needs assessments, identify and seek consensus on interventions for improvement, and assess postintervention outcomes data.
Next, to create a community of learners, we convened meetings with the ae4Q participants at national conferences. For example, a session at the 2011 AAMC annual meeting included presentations of preliminary results from two pilot sites along with a tutorial on the logic model as a method for outcomes evaluation.
To facilitate the integration of QI and effective educational interventions, we created a Web site (www.aamc.org/ae4q). On the basis of the pilot sites’ clinical improvement priorities, we created “content bundles” that included resources from the relevant literature and education and QI tools in specific clinical areas (e.g., venous thromboembolism [VTE] prophylaxis, hospital readmission, sepsis, surgical checklists, and others). Other faculty and staff development tools also are available on the site. Further, we provided guidance and consultation for the pilot sites to mentor individuals and groups within their academic health systems as they developed learning organization functions.12 We also collated, developed, and disseminated educational resources to the participating AMCs.
Evaluating the ae4Q pilot initiative
To evaluate our pilot, we used postintervention surveys, follow-up interviews with staff at the participating AMCs, activity update reports from each site, and reviews of the logic models submitted by the sites.
Preintervention readiness assessment survey results.
The results of the preintervention survey showed wide variation among the pilot sites’ organizational structures, including medical school/hospital relationships, representation of QI expertise on CME committees, and the presence of efforts to align health care delivery with community, system, or patient needs. We found less variation in educational methods and formats; most sites used mainly didactic curricula in conference settings, delivering primarily clinical content to physician audiences. Sites reported that clinical data were moderately available and used primarily by the QI enterprise as opposed to informing medical education programs. Although we found some variation in responses, most sites reported limited use of CME to advance QI and patient safety in their organizations. In contrast, most sites reported moderate to very positive support from both the medical education and clinical leadership for aligning these efforts, as well as a culture of continuous QI rather than simply meeting accreditation requirements. All sites reported ACCME accreditation. At the start of the pilot, five sites reported Accreditation with Commendation status (meeting additional criteria for performance-based CME). At the end of the pilot, eight did so. Most sites were in the planning phase for their reaccreditation self-studies and intended to describe their participation in the pilot in their reports.
Our follow-up telephone interviews and e-mail communications with at least one representative from each site elicited some common findings. First, all sites identified clinical priorities and linked them to health professions education with the goal of improved processes and outcomes. Most focused on national priorities and the performance requirements of the Joint Commission and health care payers (primarily the Centers for Medicare and Medicaid Services), which we described in our content bundles. Second, all sites were able to implement or augment the cross-representation of QI and CME staff on their respective committees. Third, to varying degrees, all sites improved their internal CME programming to include the use of quality data for needs assessments and outcomes analysis—for example, by implementing or strengthening internal quality-based rounds. Fourth, representatives from each site articulated an increased visibility and value in the CME office as a channel, change agent, facilitator, and provider of resources for improvement. Fifth, staff competencies were changed to align with the mission of QI, including quality data collection and analysis for needs assessment and outcomes evaluation. Staff in clinical departments also recognized that the CME office had a role in facilitating the use of quality data for medical education. Finally, all sites described the value of a recognized national organization like the AAMC as a catalyst for change, with interviewees indicating that the pilot allowed them to advance the integration of QI and CME in ways that may not have been possible without an external intervention.
Examples of site-specific changes.
We found many positive site-specific changes as a result of the ae4Q pilot, including the following high-impact improvements:
- A new process for developing morbidity and mortality conferences that includes data and feedback from each of the health care professionals involved in the case and provides CME credit based on a QI approach.
- CME staff involvement on root cause analysis teams, leading to educational interventions supporting improvement.
- A template for institutional review board review of performance improvement studies.
- Cross-departmental training in quality and an interprofessional education program in QI.
- Use of baseline performance data combined with education through scheduled CME activities and additional self-directed learning activities to achieve measurable practice changes.
- Performance/QI faculty leads appointed in clinical departments (e.g., a vice chair of quality).
- Promotion of the use of patient satisfaction and patient safety data (available on the health system’s intranet) for grand rounds.
- System-wide Lean Six Sigma training for clinicians and educators.
- Focus on performance improvement CME at five affiliate hospitals with specific action plans for CME committees at each affiliate.
- A performance improvement CME newsletter to clinical faculty focused on the elements and value of performance-based CME.
Two pilot sites attributed measurable improvements in clinical outcomes to their participation in the ae4Q pilot. One reported a dramatic and sustained 54% decrease in VTE incidence as a result of a multimodal, interprofessional, system-wide educational and QI campaign.13 At another site, from October to December 2011, the use of blood products decreased significantly.14
Postintervention assessment survey results.
Of the 24 parameters in our pre/postintervention survey, we found noteworthy changes in the responses to 3. The greatest change was an increase in the reported use of interactive, interprofessional, innovative education methods, such as QI rounds, case discussions, and performance-based feedback, to replace didactic CME formats. We also found an increase in CME content based on performance improvement priorities and in content incorporating quality measures and tools. These new formats supplemented programs that had traditionally focused on clinical knowledge and been designed for physician-only audiences. Further, we found an increase in reported collaboration between QI and CME, extending beyond the previously limited use of CME to advance QI priorities.
We also noted some changes in the structures of CME committees to include QI experts, in the use of clinical data in educational planning, and in the availability of clinical data to education planners.
Implications of the ae4Q Pilot Initiative
Although our pilot resulted in clear examples of the successful alignment of CME and QI, along with important clinical, educational, and organizational outcomes, it also has important limitations. First, it is a pilot, comprising volunteer organizations that were selected because of their potential for success. Such a small and selective sample makes generalizability of our findings difficult. Other organizations with fewer champions and resources may not have similar outcomes. Second, a question of causality exists—the degree to which the ae4Q initiative by itself contributed to the changes we observed is unclear. Third, activities, interventions, and reporting were led by the organizations’ CME units. Given that the CME units undertook the data collection and reporting, biases toward CME may have influenced their reporting and outcomes.
Despite these limitations, our results support several conclusions. First, CME is no longer an adequate description of the performance improvement efforts that academic CME offices are now undertaking. New skills for measuring practice gaps are needed for CME, in collaboration with clinical quality measurement teams, to move from promoting simple knowledge enhancement to focusing on broader improvements in the competency and performance of clinicians and systems. Second, academic medical systems that support the alignment of QI and CME may improve their organizational and educational processes, thus improving health care outcomes. This recognition of the need to align QI and CME calls for performance-based funding models to support systematic educational approaches. Further, as AMCs form integrated systems that include regional providers, aligned CME structures may serve as the nexus of clinical improvement for entire communities, with health information technology playing an integral role in connecting and enabling communication between organizations, improving processes, and measuring outcomes.
Finally, we believe that we have developed a robust process that links CME elements to other AMC components. Our pilot demonstrated that aligning clinical quality measurement with CME may contribute to improved clinical and educational outcomes. Academic CME offices need access to clinical data if they are to identify organizational and individual practice gaps, develop educational interventions for improvement, and evaluate outcomes. In turn, AMC QI enterprises need to employ effective educational interventions to achieve robust clinical improvement outcomes.
Although many barriers remain, an on-site, consultative, supportive initiative led by a national organization can serve to align CME and QI efforts as our ae4Q initiative did. This alignment can contribute to meaningful performance improvement and better clinical outcomes. The AAMC has used these findings to create resources and ongoing services to support AMCs as they pursue efforts to align QI and CME.
1. Leape LL, Berwick DM. Five years after To Err Is Human: What have we learned? JAMA. 2005;293:2384–2390.
2. Batalden P, Davidoff F. Teaching quality improvement: The devil is in the details. JAMA. 2007;298:1059–1061.
3. Bennett NL, Davis DA, Easterling WE Jr, et al. Continuing medical education: A new vision of the professional development of physicians. Acad Med. 2000;75:1167–1172.
4. Regnier K, Kopelow M, Lane D, Alden E. Accreditation for learning and change: Quality and improvement as the outcome. J Contin Educ Health Prof. 2005;25:174–182.
6. Davis DA, O’Brien MA. Continuing medical education as a means of lifelong learning. In: Silagy C, Haines A, eds. Evidence-Based Practice in Primary Care. 2nd ed. London, UK: BMJ Books; 2001:142–156.
7. Thomson O’Brien MA, Freemantle N, Oxman AD, Wolf F, Davis DA, Herrin J. Continuing education meetings and workshops: Effects on professional practice and health care outcomes. Cochrane Database Syst Rev. 2001;2:CD003030.
8. Marinopoulos SS, Dorman T, Ratanawongsa N, et al. Effectiveness of Continuing Medical Education. Rockville, Md: Agency for Healthcare Research and Quality; 2007. AHRQ publication no. 07-E006.
11. W.K. Kellogg Foundation. Logic Model Development Guide: Using Logic Models to Bring Together Planning, Evaluation, and Action. Battle Creek, Mich: W.K. Kellogg Foundation; 2001.
12. Anderson RA, McDaniel RR Jr. Managing health care organizations: Where professionalism meets complexity science. Health Care Manage Rev. 2000;25:83–92.
13. Pingleton SK, Carlton E, Moncure M, et al. Reduction of venous thromboembolism (VTE) in hospitalized patients: A multidisciplinary, interprofessional approach aligning education with quality improvement. AAMC iCollaborative. May 2, 2012. https://www.mededportal.org/icollaborative/resource/174. Accessed June 6, 2013.
14. LeBlond RF. Chief quality officer, University of Iowa Hospitals and Clinics. Personal communication with N. Davis, January 2012.
© 2013 by the Association of American Medical Colleges