Academic Medicine, December 2009, Volume 84, Issue 12
doi: 10.1097/ACM.0b013e3181bfa080
Quality and Safety

Methodological Rigor of Quality Improvement Curricula for Physician Trainees: A Systematic Review and Recommendations for Change

Windish, Donna M. MD, MPH; Reed, Darcy A. MD, MPH; Boonyasai, Romsai T. MD, MPH; Chakraborti, Chayan MD; Bass, Eric B. MD, MPH


Author Information

Dr. Windish is assistant professor, Department of Internal Medicine, Yale University School of Medicine, New Haven, Connecticut.

Dr. Reed is assistant professor, Department of Internal Medicine, Mayo Clinic College of Medicine, Rochester, Minnesota.

Dr. Boonyasai is instructor, Department of Medicine, Johns Hopkins University School of Medicine, Baltimore, Maryland.

Dr. Chakraborti is assistant professor, Department of Medicine, Tulane University Health Sciences Center, New Orleans, Louisiana.

Dr. Bass is professor of medicine, Department of Medicine, Johns Hopkins University School of Medicine, Baltimore, Maryland.

Correspondence should be addressed to Dr. Windish, Yale Primary Care Residency Program, 64 Robbins Street, Waterbury, CT 06708; telephone: (203) 573-6751; fax: (203) 573-6707; e-mail: donna.windish@yale.edu.


Abstract

Purpose: To systematically determine whether published quality improvement (QI) curricula for physician trainees adhere to QI guidelines and meet standards for study quality in medical education research.

Method: The authors searched MEDLINE, EMBASE, CINAHL, and ERIC between 1980 and April 2008 for physician trainee QI curricula and assessed (1) adherence to seven domains containing 35 QI objectives, and (2) study quality using the Medical Education Research Study Quality Instrument (MERSQI).

Results: Eighteen curricula met eligibility criteria; 5 involved medical students and 13 targeted residents. Three curricula (17%) measured health care outcomes. Attitudes about QI were positive, and many behavior and patient-related outcomes showed positive results. Curricula addressed a mean of 4.3 (SD 1.8) QI domains. Student initiatives included 38.2% [95% CI, 12.2%–64.2%] of beginning student-level objectives and 23.0% [95% CI, −4.0% to 50.0%] of advanced student-level objectives. Resident curricula addressed 42.3% [95% CI, 29.8%–54.8%] of beginning resident-level objectives and 33.7% [95% CI, 23.2%–44.1%] of advanced resident-level objectives. The mean (SD) total MERSQI score was 9.86 (2.92), with a range of 5 to 14 [total possible range 5–18]; 33% of curricula demonstrated lower study quality (MERSQI score ≤ 7). Curricula varied widely in quality of reporting, teaching strategies, evaluation instruments, and funding obtained.

Conclusions: Many QI curricula in this study inadequately addressed QI educational objectives and had relatively weak research quality. Educators seeking to improve QI curricula should follow recommended curricular and reporting guidelines, strengthen methodologic rigor through the development and use of validated instruments, draw on QI resources already present in health care settings, and pursue outside funding opportunities.

Tens of thousands of Americans die each year as a consequence of medical errors, and hundreds of thousands more suffer or narrowly escape nonfatal injuries that a high-quality health care system would prevent.1 The science of quality improvement (QI) has been identified as an appropriate method for addressing safety issues in health care,2,3 yet few physicians are trained in QI methods. It is imperative that all physicians acquire core QI knowledge and skills in order to influence the system transformations that are necessary to improve health outcomes.4

Several organizations have recognized that achieving transformational changes in health care systems will require fundamental changes in medical education. The Health Resources and Services Administration's Undergraduate Medical Education for the 21st Century project funded medical schools to implement and evaluate student curricula in nine content areas, including quality measurement and improvement.5 As part of the Accreditation Council for Graduate Medical Education's (ACGME) Outcome Project, all training programs must demonstrate that their residents have acquired six competencies, including practice-based learning and improvement and systems-based practice.6 In 2006, Phase 3 of the ACGME's Outcome Project began.7 This phase requires full integration of the six competencies and their assessment with learning objectives and clinical care. Because educators are required to implement curricula and demonstrate success, Ogrinc et al8 proposed a framework for teaching practice-based learning and improvement to medical students and residents. Their work detailed 35 QI objectives originating from seven domains for health care improvement recommended by the Institute for Healthcare Improvement (IHI), an independent not-for-profit organization that aims to improve quality of care worldwide.9 The impact of these guidelines in producing effective QI curricula, however, is unknown.

Recently, educational leaders have emphasized the need for evidence in medical education.10–14 This effort stresses educators' responsibility to understand both educational processes (that is, how and what we teach) and outcomes, with the goal of demonstrating physicians’ competence as they enter the workforce.11–13 To succeed in this endeavor, educators must critically evaluate the content of what is being taught, how it is taught, and the resulting outcomes.11–13

In 2007, Boonyasai et al15 reported mixed results in a systematic review of the effectiveness of teaching QI to clinicians of all levels and disciplines. To gain a better understanding of the strength of the teaching and evaluation methods used in QI curricula for physician trainees specifically, we performed a systematic review to determine whether published QI curricula for medical students and residents adhere to (1) guidelines for teaching specific domains of practice-based learning and improvement and (2) established standards for assessing the quality of medical education research.


Method

Literature search

We sought relevant studies published between January 1, 1980, and April 30, 2008, using the electronic databases MEDLINE, Excerpta Medica (EMBASE), the Cumulative Index of Nursing and Allied Health Literature (CINAHL), and the Education Resource Information Center (ERIC). The search was limited to English-language articles using the medical subject headings (MeSH) health care economics and organizations; health care quality, access, and evaluation; health care facilities, manpower, and services; health services administration; information management; informatics; medical informatics; and systems analysis. We combined this with quality terms [quality improvement, quality management, continuous improvement, performance improvement, (improve/improving/manage/managing AND quality), QI, CQI, TQM, quality assurance, quality assessment, patient safety, practice based, system based, systems based, plan do study act, plan do check act, pdsa, pdca, six sigma, and lean management] and education terms [curriculum, curricul$, educat$, teach$, train$, learn$, and the exploded MeSH term curriculum]. We exploded all MeSH terms to include subheadings. To assess for possible unpublished curricula, we reviewed all available research and educational abstracts posted from 2004 to 2007 on the Web sites of national organizations, including the Association of American Medical Colleges, the Society of General Internal Medicine, and the Society of Teachers of Family Medicine. We contacted all authors of abstracts matching our inclusion criteria and asked whether their curriculum was published or intended for publication, whether it taught QI theory, and whether any outcomes were measured. Finally, we queried experts in the field of QI and medical education and hand searched the references of all included articles, relevant review articles, and the tables of contents of key journals from September 2006 through April 2008. ProCite version 5.0.3 (Thomson ISI ResearchSoft, Berkeley, California) was used to store and track the search results.
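For readers who wish to adapt this strategy, the sketch below illustrates the Boolean structure of the search in Python, with the term lists abbreviated for space. It is an illustration only, not the script we ran, and it omits the Ovid-specific MeSH explosion and truncation syntax described above.

    # Illustrative only: terms within a block are ORed; blocks are ANDed.
    mesh_block = [
        "health care quality, access, and evaluation",
        "health services administration",
        "medical informatics",
        # ... remaining MeSH terms from the list above
    ]
    quality_block = ["quality improvement", "quality management", "QI", "CQI",
                     "patient safety", "plan do study act"]  # abbreviated
    education_block = ["curriculum", "educat$", "teach$", "train$", "learn$"]

    def or_join(terms):
        # Combine the terms within one block with OR.
        return "(" + " OR ".join(terms) + ")"

    # The three blocks are combined with AND, mirroring the strategy above.
    query = " AND ".join(or_join(block)
                         for block in (mesh_block, quality_block, education_block))
    print(query)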

Eligibility criteria

We included articles if they described a curriculum for teaching QI theory to medical students or residents and had an evaluation. We defined curriculum as a formal supervised program for changing knowledge, skill, or behaviors.16 We defined QI theory as a set of principles that involve knowledge, skills, and methods used to evaluate and implement change in a health care system using a systems-based approach.17 Key words indicating a QI initiative included quality improvement, quality management, continuous quality improvement, performance improvement, total quality management, quality assurance, patient safety, practice-based learning, systems based practice, plan do study act, plan do check act, and six sigma. Although all curricula should include the components of QI theory, the extent to which each curriculum addresses these components may vary. A curriculum was considered to have an evaluation component if it reported at least one subjective or objective outcome. We excluded studies if they did not (1) occur in North America, Western Europe, Australia, New Zealand, or Japan, (2) teach QI theory, (3) describe a curricular intervention, (4) target medical student or resident trainees, (5) include original data, (6) have an evaluation, or (7) have a full article available for review.

Title and abstract review

Reviewer pairs independently assessed citation titles for eligibility. We returned citations for adjudication in cases of disagreement and retained them if final agreement could not be reached. Two reviewers then independently assessed the abstracts of the remaining citations and retained the full articles when eligibility could not be determined from the abstract.

Study review

Data were abstracted independently from all remaining articles using a standardized form to confirm article eligibility and to capture study content. We resolved disagreements by consensus.

We evaluated curricular reports for the presence of QI objectives and methodological quality. We based QI objectives on a framework for teaching practice-based learning and improvement by Ogrinc et al.8 The recommendations of Ogrinc and colleagues stem from a literature review of prior QI curricula and from input of educational experts regarding ideal QI-teaching content and strategies. Their educational guidelines originated from seven domains for improvement of health care as recommended by the IHI9: (1) customer knowledge, (2) measurement, (3) making change, (4) developing new, locally useful knowledge, (5) health care as a system, (6) collaboration, and (7) social context and accountability. Using Dreyfus and Dreyfus’18 levels of professional development of knowledge and performance, Ogrinc et al created 35 total objectives for knowledge acquisition and application for four different learner levels: beginning student (12 objectives), advanced student (7), beginning resident (8), and advanced resident (8). We used their guidelines to assess how many of the domains and objectives were taught and whether these differed by learner level.

We assessed curricular structure, content, educational objectives, evaluation, and results of each curriculum. We categorized curricular outcomes as (1) participation outcomes, addressing learners’ views on the learning experience; (2) attitude outcomes, describing changes in learners’ perceptions or attitudes; (3) knowledge outcomes, including acquisition of concepts, procedures, or principles; (4) skills outcomes, encompassing acquisition of problem-solving, psychomotor, and social skills; (5) behavior outcomes, addressing documented transfer of learning to learners’ actions; (6) process outcomes, addressing wider changes in the organizational delivery of care that are attributable to the educational program; and (7) patient outcomes. Outcomes for pre–post assessments and controlled trials were categorized as “better,” “worse,” “no change,” or “not reported” on the basis of the description provided and/or statistical analyses presented by the study authors. Outcomes for postintervention-only studies were categorized as “favorable,” “unfavorable,” “unclear,” or “not reported” on the basis of the description provided in the text.
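This coding scheme can be summarized as a simple data structure. The sketch below is our illustration of how such ratings could be recorded and validated, not the actual abstraction form we used.

    OUTCOME_CATEGORIES = (
        "participation", "attitude", "knowledge", "skills",
        "behavior", "process", "patient",
    )

    # Rating scales differed by study design, as described above.
    RATINGS_COMPARATIVE = {"better", "worse", "no change", "not reported"}
    RATINGS_POST_ONLY = {"favorable", "unfavorable", "unclear", "not reported"}

    def code_outcome(category, rating, comparative):
        """Check one outcome rating against the coding scheme above."""
        if category not in OUTCOME_CATEGORIES:
            raise ValueError(f"unknown category: {category}")
        allowed = RATINGS_COMPARATIVE if comparative else RATINGS_POST_ONLY
        if rating not in allowed:
            raise ValueError(f"invalid rating for this design: {rating}")
        return (category, rating)

    # Example: a pre-post study whose behavior outcome improved.
    print(code_outcome("behavior", "better", comparative=True))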

We assessed the methodological quality of curricula using the Medical Education Research Study Quality Instrument (MERSQI).19 The MERSQI is a 10-item instrument used to evaluate the methodological quality of quantitative studies in medical education. The 10 items reflect six domains of study quality: study design, sampling, data type (subjective or objective), validity of assessments, data analysis, and outcomes. The maximum score for each domain is 3, and total scores range from 5 to 18. The total MERSQI score is calculated as the sum of item scores, with appropriate reductions in the denominator for “not applicable” responses; totals are adjusted to a denominator of 18 to allow comparison of scores across studies. Evidence of high reliability and of internal structure, content, and criterion validity has been established for MERSQI scores.19
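As a concrete illustration of this adjustment, the sketch below shows one plausible reading of the arithmetic: the sum of the applicable item scores is rescaled to the full 18-point maximum. This is our interpretation of the published description of the instrument,19 not code from the MERSQI authors, and the item maxima shown are hypothetical.

    def adjusted_mersqi_total(item_scores, item_maxima):
        """Rescale the sum of applicable item scores to the 18-point maximum.

        item_scores: per-item scores, with None marking 'not applicable'.
        item_maxima: maximum possible score for each item (hypothetical here).
        """
        applicable = [(s, m) for s, m in zip(item_scores, item_maxima)
                      if s is not None]
        total = sum(s for s, _ in applicable)
        denominator = sum(m for _, m in applicable)
        return total / denominator * 18

    # Hypothetical example: the third item is 'not applicable', so the
    # remaining scores are rescaled: 9.5 / 12 * 18 = 14.25.
    print(adjusted_mersqi_total([3, 1.5, None, 3, 2], [3, 3, 3, 3, 3]))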

Statistical analysis

Because of the heterogeneity of curricular evaluations and results, we were unable to combine outcomes to determine overall effects. Instead, we relied on the P values presented by individual authors to assess statistical significance, using a cutoff level of ≤.05. We hypothesized that curricular funding, trainee level, and publication after 2003 (the implementation year of the ACGME's Outcome Project) might influence outcomes. We used two-tailed chi-square tests to assess differences in proportions and either a two-tailed t test or the Wilcoxon rank sum test to assess differences in continuous variables. We considered a P value < .05 statistically significant for each test. We determined interrater reliability for items in full article abstraction, including separate ratings for QI content and MERSQI questions, using kappa. We performed all analyses using Stata release 8.2 (StataCorp, College Station, Texas).
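To make these analytic choices concrete, the sketch below reproduces the same comparisons on hypothetical data using standard SciPy and scikit-learn routines. Our actual analyses were performed in Stata 8.2, so this is an illustration of the test selection, not our analysis code, and all values shown are invented.

    import numpy as np
    from scipy import stats
    from sklearn.metrics import cohen_kappa_score

    # Chi-square test on proportions (hypothetical 2 x 2 table: funding
    # status versus whether a curriculum was implemented only once).
    table = np.array([[3, 3],    # no funding: once / more than once
                      [7, 5]])   # funding:    once / more than once
    chi2, p, dof, expected = stats.chi2_contingency(table)
    print(f"chi-square P = {p:.2f}")

    # Continuous comparison (hypothetical MERSQI scores by learner level).
    students = [9, 5, 13, 10, 10]
    residents = [14, 8, 11, 9, 12, 10, 7]
    t, p_t = stats.ttest_ind(students, residents)    # two-tailed t test
    z, p_w = stats.ranksums(students, residents)     # Wilcoxon rank sum test
    print(f"t test P = {p_t:.2f}; rank sum P = {p_w:.2f}")

    # Interrater reliability (kappa) on hypothetical paired ratings.
    rater1 = [1, 0, 1, 1, 0, 1, 0, 0]
    rater2 = [1, 0, 1, 0, 0, 1, 0, 1]
    print(f"kappa = {cohen_kappa_score(rater1, rater2):.2f}")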


Results

Interrater reliability

Interrater reliability was high for all three areas evaluated: (1) QI content (κ = 0.74), (2) MERSQI scores (κ = 0.89), and (3) all remaining items relating to curricular characteristics (κ = 0.84).

Characteristics of eligible curricula

Of 16,897 citations obtained through electronic searching, 17 met eligibility criteria (see Figure 1). Of 4,811 abstracts from national organization Web sites, 10 matched our inclusion criteria: 4 were already retained from our electronic search; 4 met inclusion criteria but, because of author time limitations, had not been submitted for publication; 1 was available electronically; and the publication status of 1 was unknown.

Figure 1

Table 1 summarizes the curricular characteristics. Twelve curricula (67%) were published after 2003.20–31 One study occurred at two institutions,22 but the remainder (94%) occurred at single institutions. Twelve studies (67%) cited a funding source.21–24,26,27,29–34 Three reports (17%) completely described learner characteristics, including age or years of experience, gender, profession, and training level.22,25,27 Thirteen curricula (72%) taught residents and encompassed generalist and surgical specialties.21–26,28,30–36 Three of these efforts (17%) included faculty attendings, fellows, and other ancillary staff.24,30,35 The remaining five curricula (28%)20,27,29,32,36 targeted medical students, with one including nursing and pharmacy students.20

Table 1
Curricular structure and content

As seen in Table 2, only two reports (11%) adequately described a local needs assessment and explained how the curriculum was tailored to meet the needs of the learners.26,31 Eight articles (44%) did not describe a needs assessment in any way.20,24,27–29,33,34,37 Eight reports (44%) described educational strategies in enough detail to allow replication.20,21,24–26,33,36,37 All studies described didactic instruction combined with experiential learning in the form of a project. Projects ranged from performing a root cause analysis in multidisciplinary groups for hypothetical clinical scenarios20 to conducting chart audits with recommendations and feedback at office practice sites.29,31,33,37

Table 2

The most common didactic teaching method was small-group work (17 curricula; 94%).20–29,31–36 Other common methods included lectures (14; 78%),20–22,24–30,31–34,36 brainstorming (4; 22%),24,26,34,36 audio/visual material (3; 17%),21,24,37 mentoring (2; 11%),22,23 and role-playing (2; 11%).24,36 Eight curricula (44%) taught QI theory in combination with a clinical best practice.21,24,25,30–34,37 Ten curricula (56%)21,22,26,28,30,33–37 used specific conceptual models for QI, including the Chronic Care Model38 and the industrial CQI (continuous QI) model.

Nine curricula (50%) adequately described the frequency and duration of educational sessions.20–23,26,28,31,36,37 The longest running curriculum spanned four years of medical school, but the frequency and duration of its sessions were unclear.29 Others ranged from daily sessions over three days to multiple sessions across several weeks or months. Ten initiatives (56%) were implemented more than once by the time of publication.22,23,25,26,28,29,32–34,36 No differences occurred with respect to (1) the number of times curricula were implemented and curricular funding (50% of those without funding were implemented only once, compared with 58% of those with funding, P = .74), (2) learner level (40% of medical student curricula were implemented once, compared with 62% of resident curricula, P = .41), or (3) date of publication (33% of curricula published after 2003 were implemented only once, versus 66% of those published in 2003 or earlier, P = .18).

QI educational objectives

Thirteen curricula (72%) clearly stated goals and objectives.20–23,26,28,30,31,32,34–37 Twelve (67%) specifically mentioned learning the principles and methods of QI as a primary objective.20–23,26–29,31,34,35,37 Curricula addressed a mean of 4.3 (SD 1.8) QI domains and were more likely to address the QI-specific educational domains of developing new, locally useful knowledge (18 curricula; 100%), making change (17; 94%), and measurement (15; 83%) (see Table 3). Fewer curricula addressed the domains of understanding health care as a system (10, 56%)21–25,27–29,31,34 or social context and accountability (9, 50%).21,24,29–33,35,37

Table 3

Resident curricula addressed more QI domains than medical student curricula, although the difference was not statistically significant (4.6 versus 3.4, P = .22). Medical student and resident curricula addressed a similar percentage of their learner-level QI objectives: students 32.6% [95% CI, 9.34%–55.9%] versus residents 38.0% [95% CI, 27.4%–48.6%], P = .58. Student initiatives included 38.2% [95% CI, 12.2%–64.2%] of beginning student-level objectives and 23.0% [95% CI, −4.0% to 50.0%] of advanced student-level objectives. Resident curricula addressed 42.3% [95% CI, 29.8%–54.8%] of beginning resident-level objectives and 33.7% [95% CI, 23.2%–44.1%] of advanced resident-level objectives.
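The confidence intervals reported here are for mean percentages across curricula. A normal-approximation interval of the form mean ± 1.96 × SD/√n, which is our assumption about how such intervals are constructed and would explain a lower bound falling below 0%, can be sketched as follows (the per-curriculum percentages are hypothetical):

    import math

    def mean_ci(values, z=1.96):
        """Normal-approximation 95% CI for a mean percentage across curricula."""
        n = len(values)
        mean = sum(values) / n
        sd = math.sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
        half_width = z * sd / math.sqrt(n)
        return mean, mean - half_width, mean + half_width

    # Hypothetical percentages of objectives addressed by five curricula;
    # with small n, the lower bound can fall below 0%, as in the advanced
    # student-level interval reported above.
    print(mean_ci([10.0, 55.0, 25.0, 40.0, 60.0]))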

O’Connell et al's29 undergraduate curriculum in systems-based care addressed 12 of 19 (63%) objectives in six of the seven medical student domains. Although customer knowledge was not addressed, learners used measurement principles to perform a quality assessment at their community-based practice sites, applied new knowledge in a QI project involving their patient panel, and described the business case for quality in health care, all of which are resident-level objectives.

The presence of curricular funding was associated with a higher number of objectives addressed for (1) customer knowledge (median 2 [interquartile range, 1–3] versus 0 [0–0.25], P < .001) and (2) developing new, locally useful knowledge (3 [2–4] versus 2 [1–2], P = .01). No differences in addressing QI objectives were seen based on year of publication.

Evaluation and results
Evaluation strategy.

The most frequent curricular evaluation strategy was pretest/posttest (56%),21,22,26,28–31,33,34,36 followed by posttest only (28%),20,23,32,35,37 nonrandomized, two-group designs (11%),22,25 and a randomized controlled trial (6%)27 (see Table 4). The randomized controlled trial used an early/late intervention design with medical students to ensure that all learners received equal instruction.27 It was the only curriculum to discuss a power analysis and subsequently demonstrate a sufficient sample size to draw conclusions. No curricula used blinding during assessments.

Table 4
Participation outcomes.

Nine initiatives (50%) measured curricular satisfaction.22–25,27,31,32,35,37 Participants were satisfied with the curricula at all but one site.22 One curriculum (6%) used participation outcomes as its only means of assessing curricular success.35

Attitude outcomes.

Fourteen initiatives (78%) addressed participant attitudes.20–29,31–33,36 Attitudes were the highest-level outcome measured in five (28%).20,21,28,29,32 Most attitude measures addressed confidence in learner knowledge and self-assessed proficiency in developing a study aim, identifying measures, and performing QI activities. Most measures evaluated showed favorable results.

Knowledge outcomes.

Six curricula (33%) measured knowledge22,24,26,27,33,37 and produced positive results in 49% of the total outcomes measured. Three curricula (17%)22,24,27 used the Quality Improvement Knowledge Application Tool (QIKAT),39 an instrument with good discriminant validity for assessing the application of knowledge about process improvement concepts. The QIKAT asks residents to evaluate scenarios and then provide an aim, measures, and a proposal for change for a potential QI project. Trained raters score the answers on a scale of 0 to 15. Ogrinc et al27 adapted the instrument for medical students and demonstrated higher scores in intervention students compared with controls.

Behavior/process outcomes.

Seven curricula (39%) used outcomes addressing changes in clinician behaviors.24,25,30,31–34,36 Common types included chart documentation of patient health care issues24,32,35 and use of appropriate testing and interventions, such as foot exams25,33 and immunizations.34 Two initiatives (11%) found mixed results,25,33 three had positive results,31,34,36 and two did not report a statistical assessment.24,30

Patient outcomes.

Patient care outcomes were measured least frequently (three efforts; 17%).25,33,34 Two curricula (11%) evaluated diabetes measures and showed improved HbA1c levels.25,33 One curriculum25 also measured and demonstrated improvement in LDL cholesterol but did not see changes in systolic or diastolic blood pressure. This was the only curriculum to adequately describe the setting and patient population along with baseline patient characteristics.

Costs and barriers.

Ten curricula (56%) discussed financial or human resource costs encountered during curricular development and implementation21,22,25,27,28,30,31,33,35,37 (see Table 4). Only two (11%) reported a monetary amount incurred.30,31 Five curricula (28%) discussed an excessive amount of time needed for development and/or implementation.25,27,28,35,37 Seven (39%) mentioned administrative support.24,25,30,34–37

Fourteen initiatives (78%) described barriers encountered during implementation, including time and “buy-in.”21–28,30,31,33,35–37 All barriers were reported as informal remarks without specific methods for evaluation. Some authors discussed ways to overcome common barriers. Varkey et al24 recommended involving multidisciplinary faculty, choosing a project meaningful to all learners, providing mentorship, and fostering learner collaboration with institutional QI leadership to prevent potential learner disinterest and dissatisfaction. Ellrodt35 described overcoming potential “buy-in” barriers from trainees by translating QI terms, processes, and tools into medical equivalents; choosing projects that are important to trainees and solvable; and evaluating progress with dissemination and feedback to the learners.

Methodological quality of curricula.

The mean (SD) total MERSQI score was 9.86 (2.92) and ranged from 5 to 14 (see Table 5). Six curricula (33%) demonstrated lower study quality, with MERSQI scores ≤7. Four curricula (22%) had high scores, ≥13. The curriculum with the highest total MERSQI score of 14 was a nonrandomized study at two institutions that used multiple measures, including the QIKAT, to assess outcomes.22 Most other curricula used cross-sectional or single-group pre–post designs and did not demonstrate validity evidence for evaluation instruments. MERSQI scores did not differ by (1) funding [no funding, 9.17 (95% CI, 5.87–12.46) versus funding, 10.21 (95% CI, 8.37–12.04), P = .49], (2) learner level [medical students, 9.40 (95% CI, 5.30–13.50) versus residents, 10.04 (95% CI, 8.29–11.78), P = .69], or (3) date of publication [2003 or earlier, 10.16 (95% CI, 7.01–13.33) versus after 2003, 9.71 (95% CI, 7.80–11.62), P = .76].

Table 5

Discussion

In response to national mandates to improve health care quality1,4,40 and physician competency,6,7 medical educators have developed an array of QI curricula for physician trainees. Several curricula in this review included objectives representing best practices for teaching QI8 and measured higher-level outcomes including learner behaviors and health care outcomes. However, use of QI educational strategies and methodological rigor of studies differed widely. In the subsequent sections of this discussion, we highlight the successes and problems identified by our systematic review in designing, evaluating, and publishing QI curricula for medical trainees and offer specific recommendations for future curriculum development and evaluation.

Heterogeneity in reporting of QI curricula

Our review indicates that QI curricula do not always adhere to a standard reporting format. Only three initiatives (17%) were rated “fair” or better for how well they described their needs assessment, goals and objectives, and educational strategies. Although all curricula discussed the content of their educational methods, only half provided enough detail to ensure replication. Feasibility and sustainability of curricula are also questionable, because few reported monetary or human resource costs needed for curricular development and maintenance. Reporting inadequacy may be explained by length restrictions imposed by journals or by authors’ limited awareness of guidelines for reporting curricular interventions.16,41–43

QI initiatives seek to invoke changes in behavior through experiential learning in a defined setting with a specific set of learners. Dissemination of such efforts should thus clearly define the problem at hand, learner characteristics, the intervention, the context in which the initiative occurs, and the results with measurable outcomes so that generalizability can be assessed. In addition to guidelines educators can use to report curricular interventions,16,41–43 Davidoff and colleagues44 developed a specific framework for reporting QI initiatives called SQUIRE (Standards for QUality Improvement Reporting Excellence). The goal of SQUIRE is to provide a step-by-step guide for describing QI efforts, with the hope that standardization will allow better sharing of knowledge. A description of how to use the guidelines, along with specific examples of their use, is currently available.45 Because these guidelines require detailed descriptions of QI efforts, print journals may need to consider electronic publication of some or all of the content presented so that readers can fully appreciate the depth of QI efforts.

QI teaching strategies

The teaching strategies employed in these QI curricula addressed many objectives in Ogrinc et al's8 framework, but curricula differed in the breadth and depth of their educational strategies. In most curricula, learners received a short duration of instruction and participated in just one cycle of change. This may provide insufficient exposure or reinforcement to produce positive outcomes, especially in learner behaviors and health care improvements. In addition, the effect of the curricula on learners’ long-term behaviors was not assessed, as no longitudinal follow-up was obtained. Thus, we cannot know whether these curricula produced sustained changes in physician behavior that might subsequently improve patient outcomes.

All resident-level initiatives allowed trainees to identify areas of change in their own education or clinical practice. Some of these efforts included instruction from QI leaders and multidisciplinary teamwork. Students may have fewer opportunities to achieve educational objectives, given their infrequent contact with, and limited roles in, the health care system at their level of training. However, O’Connell et al29 successfully incorporated 16 objectives in a curriculum for students that spanned four years of medical school. Other undergraduate efforts addressed change methods through performance of a quality assessment at students’ community-based practice sites.29,33,37

To make QI a lifelong learning skill, educators may wish to develop curricula that span the undergraduate and graduate educational continuum, engage institutional leaders in QI, and include many cycles of change. Using Ogrinc et al's framework for developing QI curricula can help in these efforts. Early medical student initiatives should focus on learning the process of change in a collaborative setting with evaluative tools to measure success. Educators can focus on improvement efforts in the students’ own educational instruction, use faculty from different disciplines including the basic sciences, teach statistical methods needed to measure change, and report results back to the learners. As students become more engaged in clinical practices, efforts should focus on patient-related topics. Both student and resident initiatives should seek out readily available resources in health care settings and progress with experiential learning by linking QI theory with a clinical best practice. Partnering with hospital QI officers can reduce the burden on core faculty, increase the teaching of a multidisciplinary approach, and reduce costs as health care entities already financially support health care improvement efforts. Promoting curricula that address the link between health outcomes, care delivery, and professional development should lead to better collaborative initiatives and more sustainable health care changes.46

Methodological rigor of curricular evaluations

Many of the curricula reviewed used cross-sectional or posttest-only study designs. Fewer than one third of efforts reported validity evidence for evaluation instruments. Three curricula used the QIKAT, for which discriminant validity has been established.39 Although 11 curricula (61%) used more than descriptive measures to assess outcomes, only 10 (56%) used statistical analyses, and 1 (6%) discussed power and sample size. Results of the remaining eight initiatives are open to interpretation regarding curricular success. Thus, the methodological rigor of the curricula as measured by MERSQI scores was relatively low, yet comparable with other education research.19 Four curricula did have high MERSQI scores (≥13), suggesting that some educators in QI have adopted rigorous methods for evaluating their work. However, all but one curriculum in this sample were instituted at single sites; thus, the generalizability of findings from the individual studies is unknown. The representativeness of results is also questionable for four curricula that had low or unreported response rates.

Although randomized controlled trials remain the gold standard with respect to study designs, educators rarely use this methodology. Some educators even debate the ethics of randomly assigning groups of learners to receive an intervention,47 especially when the intervention is felt to be important to all learners. Despite this, randomized interventions should still be considered when developing QI efforts. To avoid potential differential treatment, educators can subsequently provide the intervention to the remaining trainees after the assessment period has concluded. Alternative study designs include pre–post assessments and use of a concurrent or historical control group.

Using standardized assessment tools to measure QI outcomes may present a challenge, given the paucity of validated and reliable instruments for medical education initiatives.48,49 Currently, two validated assessment tools have been used in evaluating QI curricula: the QIKAT39 and the objective structured clinical exam.50 Future goals for educators should include the development of high-quality evaluation tools. Limited understanding of statistical analyses may also be preventing educators from implementing more formal ways of evaluating curricular outcomes. When developing evaluations, educators may benefit from published guides for selection and interpretation of statistical tests.51

Outcomes of QI curricula

Approximately 17% of curricula measured health care outcomes, compared with fewer than 5% of studies in other samples of medical education research.19,52 This observation suggests that demonstrating links between educational interventions and health care outcomes may be more feasible in QI than in other areas of medical education. Because the primary goal of QI initiatives is to improve health care, it is disappointing that more than 80% of curricula did not address patient outcomes.

Although challenges to measuring patient-level outcomes of educational interventions are well documented,11,53 the opportunity to advance outcomes-based educational research11,54 is possible through QI initiatives. Because the purpose of QI is to enhance the health of patients, in addition to measuring learning, educators of QI curricula should strive to demonstrate changes in patient outcomes resulting from QI education. Medical student curricula can address many patient-related educational objectives including making change and developing new, locally useful knowledge during the clerkship years. Using longitudinal practice sites for both student- and resident-level initiatives can allow for many cycles of change and address different health care outcomes. Collaborating with QI managers in hospitals and working through a root cause analysis of a patient-related problem and subsequently proposing guidelines to improve patient safety can also produce measurable health care outcomes.

Funding of curricula

Two thirds of curricula in this sample received funding, which is substantially higher than in prior published medical education research.19,55 Although many authors acknowledged funding, 33% did not report funding from outside their institutions. Funding was associated with a greater number of QI objectives addressed for customer knowledge and for developing new, locally useful knowledge. These domains require patient-level efforts and data collection, which may necessitate paid assistance.

Although it is unclear why many curricula were able to secure funding, we speculate that there may be more funding opportunities available for QI initiatives compared with other educational topics, particularly from public and private foundations, which accounted for 28% of funding received. Linking QI objectives with health care outcomes may interest many funding agencies, and thus educators should be encouraged to seek out these sources. Using resources already in place at health care facilities and partnering with QI offices may also be viable options to reduce cost and increase funds needed to carry out initiatives.

Limitations of our study should be considered. First, we evaluated QI content using Ogrinc et al's framework, which has content validity but no established reliability. Second, we did not assess the methodological quality of the qualitative components of curricular reports. Five curricula (28%) in this review used mixed quantitative and qualitative methods, but no curricula used qualitative methods only. Third, because of a lack of resources, we were unable to abstract articles published in other languages, which may have caused us to omit some relevant curricula. Fourth, our review may be subject to publication bias. To assess for this, we reviewed abstracts from the Web sites of national organizations to determine whether curricula with poor or negative outcomes were not accepted for publication and thus omitted from our review. Our search identified only one abstract whose publication status was unknown, suggesting that publication bias is unlikely to have substantially affected our findings. Finally, because of the heterogeneity of educational strategies and evaluations, we were unable to combine outcomes across studies to determine the overall effectiveness of QI efforts.

It is clear that broad-based transformational changes in the U.S. health care system are needed to enhance health and ensure safety of patients.1,4 QI curricula have a critical role in linking educational processes and health care outcomes, and thus such curricula are paramount in determining whether educational initiatives can change physician behavior and subsequently affect quality of care. The curricular efforts presented here in both undergraduate and graduate education demonstrated that QI can be taught to trainees and that significant changes in learner behavior and patient-level outcomes are possible. However, greater methodological rigor is needed to critically evaluate the effect of curricula on educational and health outcomes. Our specific recommendations for improving curriculum development and reporting should help educators and future curriculum developers strengthen QI curriculum design and evaluation.


References

1 Institute of Medicine. To Err Is Human: Building a Safer Health System. Washington, DC: National Academy Press; 1999.

2 Voss JD, May NB, Schorling JB, et al. Changing conversations: Teaching safety and quality in residency training. Acad Med. 2008;83:1080–1087.

3 Morrison LJ, Headrick LA. Teaching residents about practice-based learning and improvement. Jt Comm J Qual Patient Saf. 2008;34:453–459.

4 Institute of Medicine. Health Professions Education: A Bridge to Quality. Washington, DC: National Academy Press; 2003.

5 Pascoe J, Babbott D, Pye K, Rabinowitz H, Veit K, Wood D. The UME-21 project: Connecting medical education and medical practice. Fam Med. 2004;36(suppl):S12–S14.

6 Accreditation Council for Graduate Medical Education (ACGME). Outcome Project. Available at: http://www.acgme.org/Outcome. Accessed August 19, 2009.

7 Accreditation Council for Graduate Medical Education Outcome Project. Timeline—Working Guidelines. Available at: http://www.acgme.org/outcome/project/timeline/TIMELINE_index_frame.htm. Accessed August 19, 2009.

8 Ogrinc G, Headrick L, Mutha S, Coleman M, O’Donnell J, Miles P. A framework for teaching medical students and residents about practice-based learning and improvement, synthesized from a literature review. Acad Med. 2003;78:748–756.

9 Batalden P, Berwick D, Bisognano M, Splaine M, Baker G, Headrick L. Knowledge Domains for Health Professional Students Seeking Competency in the Continual Improvement and Innovation of Health Care. Boston, Mass: Institute for Healthcare Improvement; 1998.

10 Whitcomb M. Using clinical outcomes data to reform medical education. Acad Med. 2005;80:117.

11 Chen F, Bauchner H, Burstin H. A call for outcomes research in medical education. Acad Med. 2004;79:955–960.

12 Dauphinee W, Wood-Dauphinee S. The need for evidence in medical education: The development of best evidence medical education as an opportunity to inform, guide, and sustain medical education research. Acad Med. 2004;79:925–930.

13 Leach D. A model for GME: Shifting from process to outcomes. A progress report from the Accreditation Council for Graduate Medical Education. Med Educ. 2004;38:12–14.

14 Chen F, Burstin H, Huntington J. The importance of clinical outcomes in medical education research. Med Educ. 2005;39:350–351.

15 Boonyasai R, Windish D, Chakraborti C, Feldman L, Rubin H, Bass E. Effectiveness of teaching quality improvement to clinicians: A systematic review. JAMA. 2007;298:1023–1037.

16 Kern D, Thomas P, Howard D, Bass E. Curriculum Development for Medical Education: A Six-Step Approach. Baltimore, Md: Johns Hopkins University Press; 1998.

17 Institute of Medicine Committee on Health Profession Education Summit. Health Professions Education: A Bridge to Quality. Washington, DC: National Academy Press; 2003.

18 Dreyfus H, Dreyfus S. Mind Over Machine. New York, NY: Free Press; 1986.

19 Reed D, Cook D, Beckman T, Levine R, Kern D, Wright S. Association between funding and quality of published medical education research. JAMA. 2007;298:1002–1009.

20 Horsburgh M, Merry A, Seddon M. Patient safety in an interprofessional learning environment. Med Educ. 2005;39:512–513.

21 Nuovo J, Balsbaugh T, Barton S, et al. Development of a diabetes care management curriculum in a family practice residency program. Dis Manag. 2004;7:314–324.

22 Ogrinc G, Headrick L, Morrison L, Foster T. Teaching and assessing resident competence in practice-based learning and improvement. J Gen Intern Med. 2004;19(5 pt 2):496–500.

23 Weingart S, Tess A, Driver J, Aronson M, Sands K. Creating a quality improvement elective for medical house officers. J Gen Intern Med. 2004;19:861–867.

24 Varkey P, Reller M, Smith A, Ponto J, Osborn M. An experiential interdisciplinary quality improvement education initiative. Am J Med Qual. 2006;21:317–322.

25 Holmboe E, Prince L, Green M. Teaching and improving quality of care in a primary care internal medicine residency clinic. Acad Med. 2005;80:571–577.

26 Djuricich A, Ciccarelli M, Swigonski N. A continuous quality improvement curriculum for residents: Addressing core competency, improving systems. Acad Med. 2004;79(10 suppl):S65–S67.

27 Ogrinc G, West A, Eliassen M, Liuw S, Schiffman J, Cochran N. Integrating practice-based learning and improvement into medical student learning: Evaluating complex curricular innovations. Teach Learn Med. 2007;19:221–229.

28 Canal D, Torbeck L, Djuricich A. Practice-based learning and improvement: A curriculum in continuous quality improvement for surgery residents. Arch Surg. 2007;142:479–482.

29 O’Connell M, Rivo M, Mechaber A, Weiss B. A curriculum in systems-based care: Experiential learning changes in student knowledge and attitudes. Fam Med. 2004;36(suppl):S99–S104.

30 Landis S, Schwarz M, Curran D. North Carolina family medicine residency programs’ diabetes learning collaborative. Fam Med. 2006;38:190–195.

31 Oyler J, Vinci L, Arora V, Johnson J. Teaching internal medicine residents quality improvement techniques using the ABIM's practice improvement modules. J Gen Intern Med. 2008;23:927–930.

32 Frey K, Edwards F, Altman K, Spahr N, Gorman R. The ‘Collaborative Care’ curriculum: An educational model addressing key ACGME core competencies in primary care residency training. Med Educ. 2003;37:786–789.

33 Gould B, Grey M, Huntington C, et al. Improving patient care outcomes by teaching quality improvement to medical students in community-based practices. Acad Med. 2002;77:1011–1018.

34 Mohr J, Randolph G, Laughon M, Schaff E. Integrating improvement competencies into residency education: A pilot project from a pediatric continuity clinic. Ambul Pediatr. 2003;3:131–136.

35 Ellrodt A. Introduction of total quality management (TQM) into an internal medicine residency. Acad Med. 1993;68:817–823.

36 Coleman M, Nasraty S, Ostapchuk M, Wheeler S, Looney S, Rhodes S. Introducing practice-based learning and improvement ACGME core competencies into a family medicine residency curriculum. Jt Comm J Qual Saf. 2003;29:238–247.

37 Henley E. A quality improvement curriculum for medical students. Jt Comm J Qual Improv. 2002;28:42–48.

38 Austin B, Wagner E, Hindmarsh M, Davis C. Elements of effective chronic care: A model for optimizing outcomes for the chronically ill. Epilepsy Behav. 2000;1:S15–S20.

39 Morrison L, Headrick L, Ogrinc G, Foster T. The quality improvement knowledge application tool: An instrument to assess knowledge application in practice-based learning and improvement. J Gen Intern Med. 2003;18:250.

40 Association of American Medical Colleges: Medical School Objectives Project. Learning Objectives for Medical Student Education: Guidelines for Medical Schools. Available at: http://www.aamc.org/meded/msop/start.htm. Accessed August 19, 2009.

41 Reed D, Price E, Windish D, et al. Challenges in systematic reviews of educational intervention studies. Ann Intern Med. 2005;142(12 pt 2):1080–1089.

42 Green M. Identifying, appraising, and implementing medical education curricula: A guide for medical educators. Ann Intern Med. 2001;135:889–896.

43 Morrison J, Sullivan F, Murray E, Jolly B. Evidence-based education: Development of an instrument to critically appraise reports of educational interventions. Med Educ. 1999;33:890–893.

44 Davidoff F, Batalden P, Stevens D, Ogrinc G, Mooney S; SQUIRE Development Group. Publication guidelines for improvement studies in health care: Evolution of the SQUIRE Project. Ann Intern Med. 2008;149:670–676.

45 Ogrinc G, Mooney SE, Estrada C, et al. The SQUIRE (Standards for QUality Improvement Reporting Excellence) guidelines for quality improvement reporting: Explanation and elaboration. Qual Saf Health Care. 2008;17(suppl 1):i13–i32.

46 Batalden PB, Davidoff F. What is “quality improvement” and how can it transform healthcare? Qual Saf Health Care. 2007;16:2–3.

47 Prideaux D. Researching the outcomes of educational interventions: A matter of design. RCTs have important limitations in evaluating educational interventions. BMJ. 2002;324:126–127.

48 Ratanawongsa N, Thomas PA, Marinopoulos SS, et al. The reported validity and reliability of methods for evaluating continuing medical education: A systematic review. Acad Med. 2008;83:274–283.

49 Shaneyfelt T, Baum KD, Bell D, et al. Instruments for evaluating education in evidence-based practice: A systematic review. JAMA. 2006;296:1116–1127.

50 Varkey P, Natt N. The Objective Structured Clinical Examination as an educational tool in patient safety. Jt Comm J Qual Patient Saf. 2007;33:48–53.

51 Windish DM, Diener-West M. A clinician–educator's roadmap to choosing and interpreting statistical tests. J Gen Intern Med. 2006;21:656–660.

52 Prystowsky J, Bordage G. An outcomes research perspective on medical education: The predominance of trainee assessment and satisfaction. Med Educ. 2001;35:331–336.

53 Carney PA, Nierenberg DW, Pipas CF, Brooks WB, Stukel TA, Keller AM. Educational epidemiology: Applying population-based design and analytic approaches to study medical education. JAMA. 2004;292:1044–1050.

54 Whitcomb M. Research in medical education: What do we know about the link between what doctors are taught and what they do? Acad Med. 2002;77:1067–1068.

55 Carline J. Funding medical education research: Opportunities and issues. Acad Med. 2004;79:918–924.



© 2009 Association of American Medical Colleges
