Early elective deliveries are scheduled deliveries that occur at less than 39 weeks of gestation without a medical indication. Since 1999, the American College of Obstetricians and Gynecologists has advised against early elective deliveries,1 yet data showed that 17% of neonates in the United States were born electively before 39 weeks of gestation in 2010.2 This gap between evidence and practice, together with growing recognition of the burden of morbidity in early-term neonates,3,4 prompted quality improvement initiatives to reduce early elective deliveries.
The Ohio Perinatal Quality Collaborative initiated a successful quality improvement effort to reduce early elective deliveries in 2008, working with 20 charter hospitals that were large, urban, academic institutions.5 In 2012, when asked by its key stakeholders to spread this successful initiative across the state, the Ohio Perinatal Quality Collaborative piloted an approach beginning with 15 community hospitals.6 Data collection in the pilot hospitals was necessarily limited by the absence of large residency or research programs whose personnel had provided hand-collected data in the charter hospitals. Although resources for hand-collected data were not available, the Ohio vital statistics birth registry was accessible at every maternity hospital. Given the widely acknowledged concerns regarding the accuracy and timeliness of birth registry data,7 it became clear that, if the Ohio vital statistics birth registry database was to be the sole data source for a statewide initiative to reduce early-term, nonmedically indicated deliveries, a parallel effort to improve birth registry accuracy would be needed. Therefore, the Ohio Department of Health Bureau of Vital Statistics and the Ohio Perinatal Quality Collaborative worked together, using the dissemination of the original 39-week quality improvement initiative to pilot an integrated effort to improve the accuracy and timeliness of Ohio vital statistics birth registry data and to improve clinical care, with outcomes tracked using only Ohio birth registry data. Integration of these projects into a single coordinated effort facilitated the necessary collaboration between the birth registrar and clinical personnel that drove improved accuracy and timeliness of the birth registry data.6 Building on the success of the charter and pilot projects, our objective was to evaluate the success of a quality improvement initiative that extended these efforts to all Ohio maternity hospitals rapidly and efficiently.
MATERIALS AND METHODS
During this initiative, 107 Ohio hospitals provided maternity services. Of these, 35 participated in our initial quality improvement projects, leaving 72 hospitals eligible to participate. Hospital leaders were sent an invitation letter signed by the Ohio Perinatal Quality Collaborative, Ohio Hospital Association, and Ohio Department of Health. The Chief of the Ohio Department of Health Bureau of Vital Statistics contacted hospital leaders and obstetric faculty contacted local clinical providers to encourage participation. Seventy of 72 eligible maternity hospitals agreed to participate. There was no financial compensation to hospitals for participation.
The quality improvement support program was designed to be implemented over 8 months. The program was led by a trained quality improvement consultant, a representative from the Ohio Department of Health, and obstetric faculty. Participating hospitals were asked to form a quality improvement team consisting of a physician (obstetric lead physician), nurse, and birth registry staff member. The collaborative did not provide incentives to quality improvement team members for quality improvement activities such as data collection or participation in webinars. Hospitals may have provided incentives, but this was not required for participation and use of local incentives was not tracked. Hospitals participated in the initiative only during their assigned wave. Key components of the program included:
- One-on-One Coaching Webinars. The quality improvement consultant met with each hospital team through webinars monthly over the first 3 months of the project to assist teams in completing process flow maps for birth registry data abstraction and for scheduling of deliveries and to identify opportunities for improvement. These webinars were also used as a chance to review national early elective delivery guidelines, examine local hospital policies, and promote available educational resources for pregnant women and health professionals. Another key activity involved teaching teams how to audit the accuracy of their birth registry data entered in the electronic birth registry system. During the pilot quality improvement initiative, these activities were conducted as part of a 4-hour in-person site visit. Teaching was performed in virtual meetings in this dissemination effort to reach a large number of hospitals in a short time.
- Face-to-Face Meeting. Members of the hospital teams met face to face at a learning session held in the fourth month of the initiative to build community and provide an opportunity for teams to share strategies for making improvements. In-person meetings allowed quality experts to provide training and tools and to support teams in redesigning their systems and in implementing practice changes.
- Monthly Group Webinars. Throughout the 8-month initiative, teams participated in monthly wave-specific group webinars to share ideas and track progress. Group webinars supported a learning collaborative model of “all teach, all learn.” Improvement strategies tested at individual sites were shared with all other participating sites to allow everyone to benefit from the learning occurring at individual hospitals. Birth registry data examining rates of early elective deliveries and audits of the accuracy of data entered in the electronic birth registry system were shared to monitor for improvement.
- Data Feedback. Hospitals were provided with quarterly statistical process control charts tracking their rates of nonmedically indicated inductions before 39 weeks of gestation using birth registry data. In addition to control charts, hospitals were provided with a quarterly list of deliveries that were identified as inductions without a medical indication based on birth registry data. This enabled hospitals to review potential failures, either to correct the birth registry data (if it was an error) or to follow up with clinical staff to understand the factors contributing to the inappropriately scheduled induction.
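The quarterly control charts described above are, in essence, charts of a proportion over time. As a minimal sketch of how such limits are computed (a standard p-chart calculation; the counts below are hypothetical and the collaborative's actual charting software is not described in the text):

```python
def p_chart_limits(events, totals):
    """Return the center line and per-period 3-sigma limits for a p-chart.

    events: count of nonmedically indicated early-term inductions per period
    totals: count of scheduled births per period
    """
    pbar = sum(events) / sum(totals)  # overall proportion = center line
    limits = []
    for n in totals:
        sigma = (pbar * (1 - pbar) / n) ** 0.5
        lcl = max(0.0, pbar - 3 * sigma)  # a proportion cannot fall below 0
        ucl = min(1.0, pbar + 3 * sigma)
        limits.append((lcl, ucl))
    return pbar, limits

# Hypothetical quarterly counts for one hospital
events = [9, 8, 6, 4]
totals = [150, 160, 155, 148]
center, limits = p_chart_limits(events, totals)
```

A point falling outside its period's limits signals special-cause variation, which is what prompts the chart review described above.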
Based on lessons learned from the original Ohio Perinatal Quality Collaborative initiatives, hospitals were given a list of key interventions to reduce early elective labor inductions and cesarean deliveries and improve birth registry accuracy (Appendices 1 and 2, available online at http://links.lww.com/AOG/B72).5,6 Throughout the quality improvement initiative, teams were instructed to use the plan–do–study–act approach to testing and implementing interventions.8
A stepped wedge design was used to evaluate the success of the quality improvement program used to spread the 39-week and birth registry accuracy quality improvement effort.9 Hospitals were randomly divided into three balanced waves based on key characteristics, including delivery volume (number of births), percentage of births covered by Medicaid, location (Ohio region), eligibility for an Ohio Hospital Association Hospital Engagement Network, and 2011 rate of early elective delivery from birth registry data. The three groups of hospitals participated in the initiative in waves, separated by 3 months. Wave 1 began in February 2013, wave 2 in May 2013, and wave 3 in August 2013. Data from the 9 months preceding the start of the initiative established the hospital's baseline. The 8-month implementation period was followed by a 6-month monitoring period to measure the hospital's ability to sustain change. A schematic of the stepped wedge design is provided in Table 1.
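The text states that hospitals were randomly divided into three waves balanced on key characteristics, without specifying the exact balancing procedure. One common way to achieve this is stratified (blocked) randomization: order hospitals on a balancing characteristic and randomize assignment within consecutive blocks. The sketch below assumes this approach with hypothetical hospitals balanced on delivery volume only:

```python
import random

def stratified_waves(hospitals, key, n_waves=3, seed=2013):
    """Assign hospitals to balanced waves via blocked randomization.

    Hospitals are sorted by a balancing characteristic, then each
    consecutive block of n_waves hospitals is randomly split one per wave,
    so waves end up similar on that characteristic.
    """
    rng = random.Random(seed)
    waves = {w: [] for w in range(1, n_waves + 1)}
    ordered = sorted(hospitals, key=key)
    for i in range(0, len(ordered), n_waves):
        block = ordered[i:i + n_waves]
        wave_order = rng.sample(range(1, n_waves + 1), len(block))
        for hospital, wave in zip(block, wave_order):
            waves[wave].append(hospital)
    return waves

# Hypothetical hospitals as (name, annual delivery volume) pairs
hospitals = [("H%02d" % i, 400 + 25 * i) for i in range(12)]
waves = stratified_waves(hospitals, key=lambda h: h[1])
```

In practice the collaborative balanced on several characteristics at once (volume, Medicaid percentage, region, Hospital Engagement Network eligibility, and baseline rate); multivariate balancing would require a richer stratification than this single-key sketch.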
The evaluation was approved by the Cincinnati Children's Hospital Medical Center institutional review board. The Ohio Perinatal Quality Collaborative activities were reviewed separately and were determined to be exempt research. In addition, the Ohio Perinatal Quality Collaborative has approval from the Ohio Department of Health institutional review board to obtain vital statistics files to support improvement efforts.
The quality improvement strategies used to assist hospitals in reducing elective deliveries before 39 weeks of gestation were aimed at both scheduled inductions of labor and cesarean deliveries. However, the primary outcome measure was the rate of inductions at 37–38 weeks of gestation that lacked a valid medical indication as tracked in the Ohio birth registry. Birth registry variables used to determine whether a scheduled birth at 37–38 weeks of gestation had a valid medical indication were selected to mimic American College of Obstetricians and Gynecologists recommendations.5,10 This measure was limited to inductions because it was difficult to determine whether a cesarean delivery was elective using birth registry data.5 The decision to use birth registry data and, therefore, examine only scheduled inductions as the outcome for this project was deemed acceptable because 1) it would not influence the implementation of the initiative because process changes were aimed at all scheduled births regardless of mode of delivery, 2) the limited burden of data collection allowed nearly universal participation by all but two of the eligible maternity hospitals in Ohio, 3) there had been a concordant decline in early scheduled births in the hand-collected and birth registry data in the initial charter project,5 and 4) the scheduled induction metric had been used since 2006 and therefore provided a robust baseline.
To assess birth registry accuracy, during the last 6 months of the initiative, hospitals provided monthly reports of audits in which they compared information found in a sample of charts with data recorded in the electronic birth registry system for six key variables: obstetric estimate of gestational age at delivery (based on the birth attendant's final estimate as determined by all perinatal factors, but not the neonatal examination), prepregnancy and gestational diabetes, prepregnancy and gestational hypertension, and induction of labor. Hospitals were initially instructed to sample five charts; however, in June 2013, they were instructed to increase their sampling to 10 charts to increase the likelihood that variables being audited would be found in the sample. This change occurred in month 5 of the implementation phase for wave 1, month 2 of the implementation phase for wave 2, and before implementation for wave 3. Hospitals could select their own charts for review to make the auditing process more manageable. Because chart selection occurred before reviewing the medical record or comparing with the electronic birth registry, we did not believe that hospitals were preferentially selecting charts with fewer errors. The accuracy measure was defined as the percentage of audited charts where the data in the electronic birth registry system matched the information found in the medical record for all of the variables reviewed.
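The accuracy measure defined above is all-or-nothing at the chart level: a chart counts as accurate only if every audited variable matches. A minimal sketch of the computation (variable names and the sample below are hypothetical):

```python
def audit_accuracy(charts):
    """Fraction of audited charts whose registry entries match the medical
    record on every audited variable (all-or-nothing accuracy measure).

    Each chart is a dict of {variable: True if registry matches the chart}.
    """
    fully_concordant = sum(1 for chart in charts if all(chart.values()))
    return fully_concordant / len(charts)

# Hypothetical 10-chart audit: 9 charts fully concordant, 1 with a
# diabetes-coding discrepancy
sample = [
    {"gest_age": True, "diabetes": True, "hypertension": True, "induction": True},
] * 9 + [
    {"gest_age": True, "diabetes": False, "hypertension": True, "induction": True},
]
rate = audit_accuracy(sample)
```

Note that a single miscoded variable fails the whole chart, which makes this a stricter measure than per-variable agreement.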
Descriptive data on characteristics of the participating hospitals were obtained from the Ohio Department of Health Data Warehouse and Hospital Directory Report. Designation of a hospital location as rural, metropolitan, or micropolitan was based on U.S. Census Bureau criteria and the hospital address (and county) as listed in the Data Warehouse.11,12
Characteristics of the hospital participants in each wave were summarized using descriptive statistics. Fisher exact tests or Kruskal-Wallis nonparametric tests were used to test for significant differences across waves. For the primary outcome, we used generalized linear repeated-measures models that included hospital as a random effect. The baseline, implementation, and sustain phases were divided into 2- to 3-month periods (Table 1). The dependent variable was the percentage of births at 37–38 weeks of gestation without a medical indication by period. Data were transformed using arcsine square root to satisfy normality assumptions. Hospitals with fewer than five births in a period were excluded for that time period to avoid unstable estimates resulting from small sample sizes. Results are reported as estimates of the median percentage and 95% CIs obtained by back-transforming least square means and CI limits from the analysis model. Comparisons among phases of the project (eg, baseline, implementation, and sustain) were made within each wave. In addition, comparisons between waves participating and not participating in the initiative (eg, baseline period for those waves not yet participating) were made across waves within period 1 and period 2. This analysis allowed for a comparison of the effects of the initiative with a concurrent control group. To determine whether there was significant improvement in the birth registry accuracy data (percentage of audited charts that were 100% correct for the variables audited) from month 1 to month 6, within each wave, the Wilcoxon signed-rank nonparametric test for paired data was used. P<.05 was considered statistically significant. Statistical analyses were performed using SAS 12.1. Because the goal of this project was to include all interested and eligible Ohio maternity hospitals, a priori power calculations were not performed.
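The arcsine square root transformation and its back-transformation used above can be sketched directly; the round trip below illustrates why estimates back-transformed from the model's least square means return to the proportion scale (the analysis itself was run in SAS, so this is purely illustrative):

```python
import math

def arcsine_sqrt(p):
    """Variance-stabilizing transform applied to each proportion (0 <= p <= 1)."""
    return math.asin(math.sqrt(p))

def back_transform(y):
    """Map a value on the transformed scale back to a proportion."""
    return math.sin(y) ** 2

# Round trip: transforming then back-transforming recovers the proportion
p = 0.12
recovered = back_transform(arcsine_sqrt(p))
```

Because the back-transformation is applied to the least square mean (not to a mean of raw proportions), the reported estimate is interpreted as a median percentage, as stated above.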
RESULTS
Seventy hospitals (97%), divided across three waves, participated. Waves were well balanced (Table 2). Participating hospitals tended to be smaller community hospitals with a typical delivery volume of 500–600 births per year. Hospitals were mainly level 1 maternity centers.
There was a significant reduction in early-term labor inductions at 37 0/7 to 38 6/7 weeks of gestation without a medical indication from the baseline to implementation phases that was sustained in the 6 months after the initiative, as shown in Table 3. All three waves had statistically significant reductions from baseline to implementation. Improvements made in waves 1 and 3 during the implementation phase were sustained. Wave 2 had a low rate of early elective deliveries at baseline but still demonstrated an improvement.
The stepped wedge design allowed for comparisons between the waves actively participating in the initiative and the waves not participating at two points in time (periods 1 and 2), as shown in Figure 1. In period 1, wave 1 was in the initial months of implementation and waves 2 and 3 were in the baseline period (serving as controls). During period 1, wave 1 had a 1.54% absolute reduction in rates of nonmedically indicated early-term inductions; however, this reduction was not significant as compared with changes seen in waves 2 and 3, which had a reduction of 0.87% and increase of 0.24%, respectively. During period 2, waves 1 and 2 were participating in the initiative and only wave 3 remained as a control. The hospitals in the implementation phase (waves 1 and 2) saw significant decreases in the rate of inductions without a medical indication as compared with hospitals in the baseline phase (wave 3; P=.018), although the effect was primarily driven by reductions seen in wave 1 hospitals. At this point, wave 1 was completing the implementation period and achieved a further reduction in early-term inductions without a medical indication (3.44% reduction from baseline) that was statistically significant compared with changes in wave 3 (1.58% increase from baseline). Wave 2, a few months into the start of implementation, saw a 1.65% absolute reduction in rates of early-term inductions without a medical indication, but this reduction was not significant as compared with changes seen in wave 3.
As shown in Table 4, all waves had significant improvement in birth registry accuracy by the end of the initiative (wave 1: 80–90%, P=.017; wave 2: 80–100%, P=.002; wave 3: 75–100%, P<.001). The median percentage accuracy combined across waves was 100% at month 6 of the initiative, achieving the project goal of greater than 95% accuracy.
DISCUSSION
A quality improvement initiative to assist hospitals in reducing early elective deliveries and improving birth registry accuracy was associated with reduced rates of nonmedically indicated early-term inductions and significant improvements in birth registry accuracy. The initiative was delivered rapidly and at scale, affecting 70 hospitals over 14 months. The initiative was based on successful strategies used in the original Ohio Perinatal Quality Collaborative initiatives,5,6 although the timeline was compressed and staffing streamlined. Core components of the initiative included engagement of hospital leaders and strong partnerships with key state stakeholders.
Similar efforts to reduce early elective deliveries have been reported by single hospitals,13,14 integrated hospital systems,15,16 and across 26 hospitals in five states.17 Because the interventions that are effective at reducing early elective deliveries have been well characterized, the most pressing challenge is how to achieve improvement at scale. The effort reported here has addressed this challenge by achieving population-wide improvement at a state level using a systematic quality improvement approach. Since this project was conducted, there has been tremendous growth in state-based perinatal improvement collaboratives.18–20 The program examined in this evaluation may be useful to state-based perinatal quality collaboratives aiming to achieve population-level improvement.
A novel aspect of this work was application of quality improvement methodology to target improvement in clinical processes (eg, early elective deliveries) and improvements in birth registry accuracy. Historically, there have been significant concerns about the accuracy and timeliness of information recorded in administrative or population health data sets.7 As a result of the increasing number of quality metrics hospitals are required to report and the large number of quality improvement initiatives occurring at an individual hospital, data collection burden is a significant concern.21 Using public health surveillance data in improvement initiatives decreases the data collection burden and allows a true population focus. Parallel efforts to improve birth registry accuracy in perinatal quality improvement efforts may be a useful model for other states and hospitals. Future efforts to document improvements in birth registry accuracy would be improved by considering an independent measure of accuracy as opposed to hospital self-evaluation as used in this project.
This initiative has some limitations. Because this effort involved improving the data collection source (birth registry) at the same time as that source was used to determine the rate of early elective deliveries, distinguishing the proportion of the reductions in nonmedically indicated inductions at less than 39 weeks of gestation related to accuracy improvement compared with clinical process change is difficult.10 Because we do not have information on whether accuracy improvements resulted from correction of over- or undercoding of delivery indications, there is a possibility that, if a majority of the improvements in birth registry accuracy occurred by correcting undercoding (eg, improving coding of indications for delivery), much of the effect we have seen could have resulted from improved birth registry accuracy as opposed to clinical process change. However, this was not the case when this project was conducted in the Collaborative's initial project in 20 larger hospitals.10 Moreover, we did not measure fetal and neonatal outcomes. Therefore, we cannot be sure that changes made by hospitals did not inadvertently result in harm resulting from inappropriate delays of medically indicated deliveries.13,22 Lastly, there were strong national trends concurrent with this project, making it difficult to determine the degree to which an external trend or any other factors contributed to the observed decline in Ohio. However, the stepped wedge design provides a concurrent control group that allows testing for differences in the presence of time trends. Our analyses suggest that the decrease in early elective deliveries seen among participating hospitals was associated with the initiative.
Although the same quality improvement support program was used across all participating hospitals, each hospital could tailor their approach to implementing changes based on their unique local context and implementation barriers. A qualitative process evaluation, conducted in parallel with this project, was designed to enable a deeper appreciation of how hospitals chose to vary the interventions and what barriers were faced by hospitals with respect to implementation. Including a process evaluation as part of dissemination studies is critical to understanding the quality of implementation and the contextual factors associated with variation in outcomes.23
We have demonstrated success of a quality improvement initiative designed to facilitate change at scale. The success of the approach used in this initiative could support a model whereby perinatal state collaboratives develop and test change strategies in a small group of hospitals and then rapidly spread to other hospitals as a means to achieve population improvement.
REFERENCES
1. Induction of labor. ACOG Practice Bulletin No. 107. American College of Obstetricians and Gynecologists. Obstet Gynecol 2009;114:386–97.
3. Clark SL, Miller DD, Belfort MA, Dildy GA, Frye DK, Meyers JA. Neonatal and maternal outcomes associated with elective term delivery. Am J Obstet Gynecol 2009;200:156.e1–4.
4. Tita AT, Landon MB, Spong CY, Lai Y, Leveno KJ, Varner MW, et al. Timing of elective repeat cesarean delivery at term and neonatal outcomes. N Engl J Med 2009;360:111–20.
5. Donovan EF, Lannon C, Bailit J, Rose B, Iams JD, Byczkowski T, et al. A statewide initiative to reduce inappropriate scheduled births at 36 (0/7)–38 (6/7) weeks' gestation. Am J Obstet Gynecol 2010;202:243.e1–8.
6. Lannon C, Kaplan HC, Friar K, Fuller S, Ford S, White B, et al. Using a state birth registry as a quality improvement tool. Am J Perinatol 2017;34:958–65.
7. Lain SJ, Hadfield RM, Raynes-Greenow CH, Ford JB, Mealing NM, Algert CS, et al. Quality of data in perinatal population health databases: a systematic review. Med Care 2012;50:e7–20.
8. Langley GL, Nolan KM, Nolan TW, Norman CL, Provost LP. The improvement guide: a practical approach to enhancing organizational performance. 1st ed. San Francisco (CA): Jossey-Bass; 1996.
9. Hussey MA, Hughes JP. Design and analysis of stepped wedge cluster randomized trials. Contemp Clin Trials 2007;28:182–91.
10. Bailit JL, Iams J, Silber A, Krew M, McKenna D, Marcotte M, et al. Changes in the indications for scheduled births to reduce nonmedically indicated deliveries occurring before 39 weeks of gestation. Obstet Gynecol 2012;120:241–5.
11. Office of Management and Budget. 2010 standards for delineating metropolitan and micropolitan statistical areas, notice. Federal Register 2010;75:37245–52.
13. Ehrenthal DB, Hoffman MK, Jiang X, Ostrum G. Neonatal outcomes after implementation of guidelines limiting elective delivery before 39 weeks of gestation. Obstet Gynecol 2011;118:1047–55.
14. Fisch JM, English D, Pedaline S, Brooks K, Simhan HN. Labor induction process improvement: a patient quality-of-care initiative. Obstet Gynecol 2009;113:797–803.
15. Clark SL, Frye DR, Meyers JA, Belfort MA, Dildy GA, Kofford S, et al. Reduction in elective delivery at <39 weeks of gestation: comparative effectiveness of 3 approaches to change and the impact on neonatal intensive care admission and stillbirth. Am J Obstet Gynecol 2010;203:449.e1–6.
16. Oshiro BT, Henry E, Wilson J, Branch DW, Varner MW; Women and Newborn Clinical Integration Program. Decreasing elective deliveries before 39 weeks of gestation in an integrated health care system. Obstet Gynecol 2009;113:804–11.
17. Oshiro BT, Kowalewski L, Sappenfield W, Alter CC, Bettegowda VR, Russell R, et al. A multistate quality improvement program to decrease elective deliveries before 39 weeks of gestation. Obstet Gynecol 2013;121:1025–31.
18. Transforming Maternity Care Symposium Steering Committee, Angood PB, Armstrong EM, Ashton D, Burstin H, Corry MP, et al. Blueprint for action: steps toward a high-quality, high-value maternity care system. Womens Health Issues 2010;20(suppl):S18–49.
19. Louis JM. Promise and challenges of maternal health collaboratives. Clin Obstet Gynecol 2015;58:362–9.
21. Institute of Medicine. Vital signs: core metrics for health and health care progress. Washington (DC): The National Academies Press; 2015.
22. Little SE, Zera CA, Clapp MA, Wilkins-Haug L, Robinson JN. A multi-state analysis of early-term delivery trends and the association with term stillbirth. Obstet Gynecol 2015;126:1138–45.
23. Moore GF, Audrey S, Barker M, Bond L, Bonell C, Hardeman W, et al. Process evaluation of complex interventions: medical research council guidance. BMJ 2015;350:h1258.