Efforts to improve mortality rates by increasing access to care throughout Mexico have resulted in approximately 95% of pregnant women receiving skilled prenatal and delivery care.1,2 Consequently, most maternal deaths now occur within health facilities in Mexico, mainly due to hypertensive disorders of pregnancy (31.6%) and hemorrhage (26.1%).3 Most maternal deaths are preventable yet continue to occur in part because of poor quality and timing of obstetric care and lack of access to emergency services.3,4 Neonatal death is closely linked to the care provided during childbirth, with more than 25% of all neonatal deaths occurring within 24 hours of delivery.5,6 Although neonatal and maternal mortality have decreased in Mexico over the last 10 years, the country is not “on track” to achieve its goal of a two-thirds reduction in child mortality, and progress toward reducing maternal mortality by three-quarters has slowed.7–9 Evidence suggests that strengthening obstetric and neonatal emergency care is needed to impact maternal and neonatal health outcomes.10–13
Continued clinical training is an important element of quality improvement, and simulation-based training in particular is an effective strategy for training teams to respond to obstetric and neonatal emergencies.12,14,15 Simulation-based obstetric emergency trainings emphasizing teamwork and communication have demonstrated improved provider confidence and competence as well as patient outcomes in resource-stable countries.16–18 In fact, the Joint Commission and the American College of Obstetricians and Gynecologists recommend using emergency drills and simulation to improve teamwork and communication for obstetric emergencies.19 However, the materials used in many simulation programs are expensive and out-of-reach for the settings that could most benefit.20 Acknowledging the potential for low-cost yet realistic simulation in resource-limited settings, PRONTO (Programa de Rescate Obstétrico y Neonatal: Tratamiento Óptimo y Oportuno), a simulation-based program, provides high-fidelity, low-cost training for interprofessional obstetric and neonatal teams.21
PRONTO simulation training was developed and piloted in Mexico in 2009 in response to a need for affordable, appropriate, and effective obstetric and neonatal emergency training.21 The impact of the course on patient outcomes, however, remains unevaluated. We hypothesize that PRONTO improves the obstetric and neonatal outcomes of participating hospitals. The objective of this study was to evaluate the impact of PRONTO on hospital-level patient outcomes in Mexico.
We set out to conduct a pair-matched cluster randomized controlled trial of PRONTO training at hospitals in Mexico. Hospitals were eligible for participation if they were government-run level 2 or 3 facilities with between 500 and 3000 annual deliveries in Chiapas, Guerrero, or Mexico State. We excluded level 1 facilities, which were equipped only for vaginal deliveries. Public hospitals in the states of Chiapas, Guerrero, and Mexico State were considered because the State Ministries for Women were committed to providing funding and in-kind support for the trial and the State Secretaries of Health approved of the trial. Initially, only facilities on a list of those with higher-than-average incidence of maternal death, provided by the National Center for Gender Equity and Reproductive Health, were considered for inclusion. All eligible hospitals were selected from the list (n = 24), matched according to birth volume, cesarean cases, mortality, complications, personnel resources, and number of operating rooms using data obtained from 2008 Ministry of Health (MOH) records, and randomly allocated to the intervention or control arm (Fig. 1). Before the start of baseline data collection, however, 11 of the 24 selected hospitals were unable to participate for a variety of reasons, including restructuring, remodeling, and natural disaster damage (earthquake and flood). Because all initially eligible hospitals had been selected from the high maternal mortality list, replacement hospitals were selected from the 60 eligible hospitals not on the original list. The 11 dropout hospitals were replaced, before training initiation, by the 11 eligible public hospitals most closely meeting the outlined matching criteria. When a dropout hospital was replaced, the replacement hospital was allocated to the opposite study arm of the remaining hospital from the matched pair.
If a replacement hospital was matched with another replacement hospital, they were randomly allocated to intervention or control, except for 2 pairs of hospitals in Mexico State, in which the member of the pair that was to receive the intervention was discretionally chosen by the local MOH. Overall, 24 hospitals were then included in the hospital-based controlled study: 12 in the intervention arm and 12 in the control arm. Trained field workers visited each of the 24 hospitals before the intervention to collect baseline facility inventory and epidemiologic data and returned quarterly to collect primary outcome data. Data collection began in August 2010 and follow-up concluded in March 2013.
PRONTO training sessions have minimal traditional didactic content and are composed primarily of interactive team and communication exercises, targeted skill-building sessions, simulation of obstetric and neonatal emergencies, video-guided debriefings, and strategic planning. The simulations use the low-technology hybrid birth simulator PartoPants, which is worn by a patient actress to simulate the obstetric emergency, and the Laerdal NeoNatalie for the neonatal resuscitation scenarios. The PartoPants were designed specifically for PRONTO trainings and are modified surgical scrub pants with a vaginal opening and a pocket for an IV bag with tubing for simulated blood.22 A cloth doll attached to a fake placenta is used to simulate the neonate during delivery. Simulations are conducted in situ at participating facilities and use only the resources that are normally available. Training Module I, conducted over the course of 2 days, covers the topics of teamwork, communication, obstetric hemorrhage, and neonatal resuscitation. Participants receive a 1-day Module II training, which incorporates shoulder dystocia, preeclampsia, and eclampsia, 2 to 3 months after Module I. The teamwork and communication concepts were adapted from the Team Strategies and Tools to Enhance Performance and Patient Safety (TeamSTEPPS™) training program and CAPE 10 Key Behavioral Skills (crew-resource management).17,23 The obstetric and neonatal emergency management curriculum is based on World Health Organization standards in maternal and newborn care, Mexican national guidelines for obstetric care, and healthcare simulation and debriefing best practices.24–26
Hospitals received PRONTO training between August 2010 and January 2012. Participants at intervention hospitals were medical personnel who cared for women and babies during labor and birth or immediately postpartum. Hospital leadership provided final selection of participants, which sometimes included nursing students, interns, or residents on rotation. Individual participation was voluntary, and no compensation was provided. Between 6.4% and 31.6% of eligible medical personnel at each facility were trained, with a mean participation rate of 20.5%; overall, 450 participants of 3228 eligible personnel in all 12 hospitals participated in the training. The state-level authorities from the MOH provided the necessary permissions for the start of the study and data collection, and the Ethics and Research Committees at the National Institute of Public Health of Mexico provided approval on August 2, 2010 (Reference 845). The hospitals allocated to the control group received no intervention but were able to apply for funding to receive the PRONTO training after the study’s conclusion. There was no blinding in this study because providers knew whether they were receiving training, and training logistics and posttraining PRONTO algorithms were regularly posted by trainees in intervention hospitals. The data analysts were also not blinded to the study assignments. The trial protocol is registered at www.clinicaltrials.gov (NCT01477554).
Our primary intended outcome of interest was perinatal mortality at 12-month follow-up; secondary outcomes were the death rate from obstetric hemorrhage, the death rate from preeclampsia/eclampsia, and maternal complications. However, as is the case in many resource-limited settings, we were unable to acquire the data as planned, given poor reporting of stillbirths and the rarity of maternal deaths. Because of these limitations, the outcome variables were adapted to most closely represent our initial intentions as outlined per protocol. Hospital-based neonatal mortality was selected as the main outcome of interest, defined as neonatal deaths occurring in the study hospitals from birth until discharge among infants delivered alive. We also measured the impact of the intervention on hospital-level maternal complications, including obstetric hemorrhage, hysterectomy, and eclampsia. A composite maternal complication variable was defined as the sum of cases of obstetric hysterectomy, obstetric hemorrhage, and maternal death. As an additional exploratory analysis, proposed by the research team after the main protocol had been approved but before the start of the main data analyses, we assessed the impact of PRONTO on mode of delivery (cesarean vs. vaginal); this was decided because Mexican authorities are interested in lowering the country’s large proportion of cesarean births.
Trained fieldworkers collected data at the hospital level from existing hospital records and facility inventories. We identified hospital-based neonatal mortality and morbidity cases from the nurse registries in the delivery rooms, operating rooms, and neonatal intensive care units (NICUs). Fieldworkers resolved inconsistencies by checking medical records or death certificates. When the information was not available from the hospital registries, data were collected from the epidemiology and statistics department. Because of logistic issues, fieldworkers traveled to hospitals at 4, 8, and 12 months postintervention (instead of quarterly at 3, 6, 9, and 12 months as planned) and, as the form specified, recorded all outcomes only for the 3 months before each data collection visit. Given this unintended schedule change, the 12 months of follow-up yielded data for only 9 months, with gaps at months 1, 5, and 9 of the follow-up period (Fig. 2).
The unit of analysis was the hospital at each of the time points. To evaluate group comparability at baseline, paired t tests and McNemar exact tests were performed on continuous and dichotomous variables, respectively. For baseline comparability of the outcome variables, we used negative binomial regression with fixed effects at the matched-pair level. Although our original choice of statistical model was Poisson, the approach most often used for count variables, the models showed evidence of overdispersion (excess variance). As an alternative, we fitted negative binomial regression, which takes overdispersion into account and thus yields adequate standard errors of the estimates.27 For our primary analysis, we fitted a mixed-effects (random intercept, fixed slopes) negative binomial regression model for each outcome measure on the available data accumulated up to 6 and 12 months of follow-up, with the total number of live births included in the models as the exposure. Mixed-effects models take into account the within-hospital correlation structure due to the repeated measurements.28 Incidence rate ratios (IRRs) were estimated using a difference-in-differences (DID) approach, which is done by including time-by-treatment interaction terms in the models. The IRR of the interaction term is the estimator of the impact of the intervention at each time point: it represents the ratio of the IRR at time t to the IRR at baseline and can be interpreted as the IRR at time t adjusted for baseline imbalances in the outcome; we refer to it as IRRDID throughout the article.29–31 A more detailed explanation of the model can be found in the online Appendix, http://links.lww.com/SIH/A268.
The matched design of the study was taken into account by including dummy variables for the matched pairs as covariates. Because the matched design, the randomization, and the DID analytic strategy all control for potential confounding, we decided not to adjust for covariates in the analysis: such adjustments might be unnecessary and would consume degrees of freedom (a particular concern given the small sample size), which in turn would reduce the precision of our estimates. However, we did adjust for the presence of a NICU in the models for neonatal mortality and cesarean section, because these outcomes are likely related to this variable and because it was unbalanced between treatment groups at baseline.
As a more appropriate analytic strategy after the unintended deviations from the data collection protocol, we fitted negative binomial regression models to estimate the impact of the intervention at 4-, 8-, and 12-month follow-up noncumulatively with a DID approach; this strategy allows us to see trends and possible waning of the impact. We included in the models the number of events for each outcome that occurred in the 3 months before each data collection date, time-by-treatment interaction terms, and the number of live births. Given the gaps in the follow-up data and the potential for slow uptake of PRONTO concepts and waning of effects, breaking the outcomes into discrete periods of follow-up was deemed most reasonable. As in the per-protocol analysis, models for cesarean section and neonatal mortality were adjusted for the presence of a NICU. To assess whether the nonrandom treatment allocations were introducing bias in the estimates, we also fitted models excluding the 4 replacement hospitals in Mexico State, otherwise following the same approach. For all tests and models, statistical significance was set at the 0.05 level, and analyses were performed with Stata v. 12.0 (StataCorp. 2011; StataCorp LP, College Station, TX).
A total of 450 providers at the 12 intervention facilities participated in Module I and/or Module II of PRONTO training, with 305 completing both modules. Overall, this corresponded to, on average, 20.5% of eligible providers at intervention sites. During study follow-up, there were 50,589 live births, including 17 maternal and 480 neonatal deaths, in the 24 hospitals. Intervention and control facility baseline resources, as well as average numbers of maternal and neonatal outcomes, were compared at baseline (Table 1). The most notable difference between groups at baseline was the presence of a NICU: 41.7% of intervention facilities had one compared with 83.3% of control facilities. The number of cases of obstetric hemorrhage was notably larger in the treated group (19.3 cases on average) than in the control group (8.2 cases). None of the comparisons yielded a P value less than 0.05. Raw incidence rates of outcomes at all time points are presented in Table 2.
The main results of the analysis showed that although the incidence of complications tended to be lower in the intervention group after study completion, neonatal mortality, maternal complications, obstetric hemorrhage, eclampsia, and hysterectomy incidence did not differ significantly overall at 6- or 12-month follow-up between the intervention and control hospitals (Table 3). However, the incidence of cesarean deliveries was significantly lower in the intervention group at 12-month follow-up [IRRDID = 0.79; 95% confidence interval (CI), 0.67–0.93] but only marginally significant after the initial 6 months of follow-up (IRRDID = 0.81; 95% CI, 0.64–1.01).
The results of the noncumulative analysis show that the 3-month interval results were similar to the cumulative results for obstetric hemorrhage, maternal complications, and obstetric hysterectomy (Table 4), with only the incidence of eclampsia being marginally lower in the intervention group at 12 months posttraining (IRRDID = 0.32; 95% CI, 0.09–1.11). Incidence of hospital-based neonatal mortality tended to be lower in the intervention group relative to controls, and at the 8-month interval posttraining, it was significantly lower by 40% after adjustment for baseline differences (IRRDID = 0.60; 95% CI, 0.37–0.95), although it was not significantly different at either 4 or 12 months postintervention. Cesarean delivery incidence was significantly lower in the intervention group at all time intervals: 4 (IRRDID = 0.84; 95% CI, 0.72–0.97), 8 (IRRDID = 0.80; 95% CI, 0.68–0.93), and 12 (IRRDID = 0.80; 95% CI, 0.68–0.93) months postintervention. The analysis excluding the nonrandomized sites in Mexico State produced similar results for both analyses; in particular, the impact on cesarean section rates was significant at all follow-up times (Tables 3, 4). All impact estimates were controlled for baseline differences in outcomes through the DID approach.
With very few exceptions, the analyses across outcomes and time points showed a lower incidence of postpartum complications for intervention hospitals than for control hospitals after adjustment for baseline differences and covariates. However, these improvements were, in general, not statistically significant, except for cesarean delivery, which consistently showed an 18% to 21% reduction in incidence throughout the follow-up period. The lack of statistical significance of the impact estimates for most outcomes is likely due to the small sample size, which allowed only large effect sizes to be detected.
A range of implementation challenges faced throughout the trial is worth recognizing and highlighting, especially for others attempting this type of impact evaluation.
First and foremost was the forced change in study design, which occurred when some hospitals withdrew from the study because of unforeseen events such as floods, earthquakes, and remodeling, followed by the inability to randomize replacement hospitals in Mexico State. These events were out of our control and have moved us to attempt a stepped-wedge design in our follow-up work. Second, data collection in resource-limited settings that relies on existing hospital registries and records can lead to unreliable case identification; for example, differentiating neonatal death from stillbirth and accurately identifying maternal hemorrhage and other maternal complications are difficult. In addition, because the study hospitals were second level, we suspect that a nontrivial proportion of complicated cases might have been referred to third-level units for treatment, thus lowering our incidence estimates and increasing the uncertainty in our impact estimations. Contrary to what one might expect, supplies, venue, and participant interest were not major hurdles. Using our transportable low-technology simulation kit and prioritizing in situ simulation with locally available supplies kept costs low and setup simple. Trainers must be flexible on some aspects of the training while maintaining fidelity to the curriculum as a whole; frequently, simulation setups had to be delayed or rapidly taken down for actual patient care. Provider apathy and disinterest were not enduring problems. Initial recruitment was met with responses such as “Oh no, not another emergency training,” but once the training had begun, participants’ comments in postevaluations were in general very positive: they enjoyed the simulations and dynamics rather than the presentation-only format of traditional trainings.
Additional providers routinely tried to join the training, but we were only able to train on average 21% of potential participants because our budget only allowed for 1 training per site, and there was no crosscoverage for ongoing clinical care. It is likely that the effect would have been greater had we been able to train a greater proportion of providers at each site.
Our exploratory analysis of cesarean deliveries, however, is especially interesting and promising, given that the incidence of cesarean deliveries was significantly lower in the intervention group at all postintervention time points in all analyses except the 6-month cumulative analysis. The decreased incidence of cesarean delivery in the intervention group could be attributable to enhanced provider confidence and ability to handle obstetric emergencies, allowing providers to let more women progress normally, as well as to an emphasis on humanized birth practices during training. Although we did not design our study to assess this variable specifically, the finding validates much of the intent of the training and merits further research, especially in Mexico, a country with consistently high and rising rates of cesarean delivery.32
These results complement process indicator findings, which concluded that PRONTO training positively impacts trained providers’ knowledge and self-efficacy and promotes goal achievement.33 Other studies in low-resource settings have also found that provider simulation-based training can positively impact maternal and/or neonatal health outcomes.34,35 For example, the Helping Babies Breathe curriculum in Tanzania, which uses low-fidelity simulations, found a significant decrease in early neonatal mortality after training in neonatal resuscitation.14 To date, however, PRONTO training represents the only evaluated and operational training program that combines high-fidelity in situ simulation, interprofessional team training, and care of the maternal–neonatal dyad in resource-limited settings.
Strengths and Limitations
This study is strengthened by its inclusion of a control group, randomized assignment, and multiple time-point measurements. Several limitations should be noted in addition to the previously identified challenges that affected our ability to conduct the study and analyze the data per protocol. There were missing data for all sites in the first, fifth, and ninth months postintervention. The gaps in the data prevented us from performing the initially planned analyses of all 12 months of follow-up data. The data, however, are missing equally for each site and thus should not introduce additional bias, but rather lower the power to detect impacts. Although this study was initially intended to be a pair-matched cluster randomized trial, after initial selection, matching, and randomization, 11 of the 24 sites were unable to participate for the variety of reasons previously described. The Mexico State hospitals were allocated by the MOH, and the criteria used to assign these sites are not well defined, so we cannot fully account for the potential bias introduced. In a comparison of the 2008 MOH baseline data used for initial selection and matching, dropout hospitals were similar to retained hospitals with regard to staffing and maternal and neonatal morbidity and mortality measures. Facility infrastructure was similar, except that dropout hospitals were more likely to have a NICU and a clinical laboratory. Despite this, results excluding the nonrandomized hospitals were similar to those of the complete sample. Large differences between groups in the value of some outcomes at baseline could potentially threaten the validity of our inferences; however, we believe that the potential bias was limited through the DID approach, which is devised to account for possible time-invariant observable and unobservable differences between the study arms.
Our study is limited because the outcome variables were measured at the hospital level, and in general, we did not collect individual-level information on particular cases. This might be a particular problem for neonatal mortality estimates because we were not able to apply exclusion criteria such as prematurity, twin birth, or congenital anomalies. However, the lack of exclusion criteria applied equally to both study arms, and thus we would not expect this limitation to bias the results. Moreover, the causal distance between the intervention and the outcome variables varies, and in several cases the link is indirect; true impacts might then be modest in size and thus not captured by the study design (the study had enough statistical power to detect only rather large impacts).
Given the nature of the intervention, blinding was not possible, and though unlikely, this may have biased the results. Training was rolled out over a 2-year period, and our models did not include other variables that can influence the quality of care and the intervention, such as patient characteristics and staffing changes. Also, there is no way to know whether training participant characteristics or the training setting differed between groups. Given the matched and quasirandomized nature of our intervention, we expect the distribution of these unmeasurable covariates to be similar in both groups. Groups did not differ significantly at baseline in terms of infrastructure, aside from the control group having more institutions with a NICU, which prompted us to include this covariate in the neonatal mortality and cesarean section models.
Our findings provide a proof of concept that simulation training of interprofessional teams in the care of the mother–neonatal dyad can impact outcomes and practices in a resource-limited setting. Although the 17 maternal deaths that occurred during the course of the study in these institutions were too few to measure impact on this ultimate outcome, they do illustrate the urgency of improving the quality of emergency response. The same training principles and strategies that have been demonstrated to optimize care during childbirth in high-resource settings can be adapted to and effective in improving care in resource-limited settings.34–36 In fact, most personnel who received the training recommended in course evaluations that it be extended to their colleagues, other institutions, and other settings, which suggests the acceptability and perceived value of this type of training.
Further improvement and retention of outcomes are important next challenges. To address this, the PRONTO team is working to increase the proportion of providers trained at each site to assure a change in institutional practice culture. We have also developed simulation tools and kits to allow local teams to continue to run simulation scenarios. PRONTO implementation trials are underway in Guatemalan and Kenyan primary level clinics to assess the setting and cultural adaptability of the program using a modified stepped-wedge study design. We are also exploring the impact of this high-fidelity simulation experience on providers’ tendencies to offer kind and dignified care to women in labor.
3. Murrugat Mendoza N. Monitoreo Ciudadano de la Politica Publica Federal para Reducir la Morbimortalidad Materna en Mexico. Mexico City, Mexico: Foro Nacional de Mujeres y Politicas de Poblacion; 2006:50.
5. Lawn JE, Cousens S, Zupan J; Lancet Neonatal Survival Steering Team. 4 million neonatal deaths: when? Where? Why? Lancet 2005;365:891–900.
6. Bhutta Z, Cabral S, Chan CW, Keenan WJ. Reducing maternal, newborn, and infant mortality globally: an integrated action agenda. Int J Gynaecol Obstet 2012;119(Suppl 1):S13–S17.
9. Lozano R, Wang H, Foreman KJ, et al. Progress towards Millennium Development Goals 4 and 5 on maternal and child mortality: an updated systematic analysis. Lancet 2011;378(9797):1139–1165.
10. Campbell O, Graham W; Lancet Maternal Survival Series steering group. Strategies for reducing maternal mortality: getting on with what works. Lancet 2006;368(9543):1284–1299.
11. Darmstadt GL, Bhutta ZA, Cousens S, et al. Evidence-based, cost-effective interventions: how many newborn babies can we save? Lancet 2005;365:977–988.
12. Koblinsky M, Matthews Z, Hussein J, et al. Going to scale with professional skilled care. Lancet 2006;368(9544):1377–1386.
13. Souza JP, Gülmezoglu AM, Vogel J, et al. Moving beyond essential interventions for reduction of maternal mortality (the WHO Multicountry Survey on Maternal and Newborn Health): a cross-sectional study. Lancet 2013;381(9879):1747–1755.
14. Crofts JF, Winter C, Sowter MC. Practical simulation training for maternity care—where we are and where next. BJOG 2011;118(Suppl 3):11–16.
15. Deering S, Rowland J. Obstetric emergency simulation. Semin Perinatol 2013;37(3):179–188.
16. Robertson B, Schumacher L, Gosman G, Kanfer R, Kelley M, DeVita M. Simulation-based crisis team training for multidisciplinary obstetric providers. Simul Healthc 2009;4(2):77–83.
17. Robertson B, Kaplan B, Atallah H, Higgins M, Lewitt MJ, Ander DS. The use of simulation and a modified TeamSTEPPS curriculum for medical and nursing student team training. Simul Healthc 2010;5(6):332–337.
18. Reynolds A, Ayres-de-Campos D, Lobo M. Self-perceived impact of simulation-based training on the management of real-life obstetrical emergencies. Eur J Obstet Gynecol Reprod Biol 2011;159(1):72–76.
19. American College of Obstetricians and Gynecologists Committee on Patient Safety and Quality Improvement. Committee opinion no. 590: preparing for clinical emergencies in obstetrics and gynecology. Obstet Gynecol 2014;123:722–725.
20. Kawaguchi A, Mori R. The In-Service Training for Health Professionals to Improve Care of the Seriously Ill Newborn or Child in Low- and Middle-Income Countries. Geneva, Switzerland: The WHO Reproductive Health Library; 2010.
21. Walker DM, Cohen SR, Estrada F, et al. PRONTO training for obstetric and neonatal emergencies in Mexico. Int J Gynaecol Obstet 2012;116(2):128–133.
22. Cohen SR, Cragin L, Rizk M, Hanberg A, Walker D. PartoPants™: the high-fidelity, low-tech birth simulator. Clin Simul Nurs 2011;7(1):e11–e18.
26. Issenberg SB, McGaghie WC, Petrusa ER, Lee Gordon D, Scalese RJ. Features and uses of high-fidelity medical simulations that lead to effective learning: a BEME systematic review. Med Teach 2005;27(1):10–28.
27. Hilbe JM. Negative Binomial Regression. 2nd ed. Cambridge, UK: Cambridge University Press; 2011.
28. Diggle PJ, Liang KY, Zeger SL. Analysis of Longitudinal Data. Oxford, UK: Oxford University Press; 1994.
29. Angrist J, Krueger A. Empirical strategies in labor economics. In: Ashenfelter O, Card D, eds. Handbook of Labor Economics. Elsevier; 1999:98–107.
30. Lance P, Guilkey D, Hattori A, Angeles G. How Do We Know If a Program Made a Difference? A Guide to Statistical Methods for Program Impact Evaluation. Chapel Hill, NC: MEASURE Evaluation; 2014:186–201.
31. Dimick JB, Ryan AM. Methods for evaluating changes in health care policy: the difference-in-differences approach. JAMA 2014;312(22):2401–2402.
32. Suárez-López L, Campero L, De la Vara-Salazar E, et al. Sociodemographic and reproductive characteristics associated with the increase of cesarean section practice in Mexico [in Spanish]. Salud Publica Mex 2013;55(Suppl 2):S225–S234.
33. Walker D, Cohen S, Fritz J, et al. Team training in obstetric and neonatal emergencies using highly realistic simulation in Mexico: impact on process indicators. BMC Pregnancy Childbirth 2014;14:367.
34. Hofmeyr GJ, Haws RA, Bergström S, et al. Obstetric care in low-resource settings: what, who, and how to overcome challenges to scale up? Int J Gynaecol Obstet 2009;107(Suppl 1):S21–S44, S44–S45.
35. Wall SN, Lee AC, Niermeyer S, et al. Neonatal resuscitation in low-resource settings: what, who, and how to overcome challenges to scale up? Int J Gynaecol Obstet 2009;107(Suppl 1):S47–S62.
36. Siassakos D, Crofts JF, Winter C, Weiner CP, Draycott TJ. The active components of effective training in obstetric emergencies. BJOG 2009;116(8):1028–1032.
Key Words: Patient simulations; Neonatal mortality; Obstetric labor complications; Program evaluation
© 2016 Society for Simulation in Healthcare