The Association Between Learning Climate and Adverse Obstetrical Outcomes in 16 Nontertiary Obstetrics–Gynecology Departments in the Netherlands

Smirnova, Alina MD; Ravelli, Anita C.J. PhD; Stalmeijer, Renée E. PhD; Arah, Onyebuchi A. MD, PhD; Heineman, Maas Jan MD, PhD; van der Vleuten, Cees P.M. PhD; van der Post, Joris A.M. MD, PhD; Lombarts, Kiki M.J.M.H. PhD

doi: 10.1097/ACM.0000000000001964
Research Reports

Purpose To investigate the association between learning climate and adverse perinatal and maternal outcomes in obstetrics–gynecology departments.

Method The authors analyzed 23,629 births and 103 learning climate evaluations from 16 nontertiary obstetrics–gynecology departments in the Netherlands in 2013. Multilevel logistic regressions were used to calculate the odds of adverse perinatal and maternal outcomes, by learning climate score tertile, adjusting for maternal and department characteristics. Adverse perinatal outcomes included fetal or early neonatal mortality, five-minute Apgar score < 7, or neonatal intensive care unit admission for ≥ 24 hours. Adverse maternal outcomes included postpartum hemorrhage and/or transfusion, death, uterine rupture, or third- or fourth-degree perineal laceration. Bias analyses were conducted to quantify the sensitivity of the results to uncontrolled confounding and selection bias.

Results Higher learning climate scores were significantly associated with increased odds of adverse perinatal outcomes (aOR 2.06, 95% CI 1.14–3.72). Compared with the lowest tertile, departments in the middle tertile had 46% greater odds of adverse perinatal outcomes (aOR 1.46, 95% CI 1.09–1.94); departments in the highest tertile had 69% greater odds (aOR 1.69, 95% CI 1.24–2.30). Learning climate was not associated with adverse maternal outcomes (middle vs. lowest tertile: OR 1.04, 95% CI 0.93–1.16; highest vs. lowest tertile: OR 0.98, 95% CI 0.88–1.10).

Conclusions Learning climate was associated with significantly increased odds of adverse perinatal, but not maternal, outcomes. Research in similar clinical contexts is needed to replicate these findings and explore potential mechanisms behind these associations.

A. Smirnova is a PhD candidate, School of Health Professions Education, Department of Educational Development and Research, Faculty of Health, Medicine, and Life Sciences, Maastricht University, Maastricht, The Netherlands, and researcher, Professional Performance Research Group, Institute for Education and Training, Academic Medical Center, Amsterdam, The Netherlands.

A.C.J. Ravelli is epidemiologist and assistant professor, Departments of Medical Informatics and Obstetrics and Gynecology, Academic Medical Center, Amsterdam, The Netherlands.

R.E. Stalmeijer is assistant professor, Department of Educational Development and Research, Faculty of Health, Medicine, and Life Sciences, Maastricht University, Maastricht, The Netherlands.

O.A. Arah is professor, Department of Epidemiology, Fielding School of Public Health, University of California, Los Angeles, Los Angeles, California.

M.J. Heineman is professor, Board of Directors, Academic Medical Center, Amsterdam, The Netherlands.

C.P.M. van der Vleuten is professor and scientific director, School of Health Professions Education, Department of Educational Development and Research, Faculty of Health, Medicine, and Life Sciences, Maastricht University, Maastricht, The Netherlands.

J.A.M. van der Post is professor, Department of Obstetrics and Gynecology, Academic Medical Center, Amsterdam, The Netherlands.

K.M.J.M.H. Lombarts is professor, Professional Performance Research Group, Institute for Education and Training, Academic Medical Center, Amsterdam, The Netherlands.

Funding/Support: None reported.

Other disclosures: None reported.

Ethical approval: The institutional ethical review board of the Academic Medical Center of the University of Amsterdam, the Netherlands, confirmed that the Medical Research Involving Human Subjects Act did not apply to this study because the study involved the use of existing data collected for quality improvement purposes and provided a waiver of ethical approval for the overall study design (W14_065 #14.17.0090) on March 12, 2014.

Previous presentations: Preliminary results from this study were presented at the Rogano research meeting in Glasgow, Scotland, on September 4–5, 2015; as a SHEILA workshop in Glasgow, Scotland, on September 6, 2015; and at the Association for Medical Education in Europe conference in Barcelona, Spain, on August 29, 2016.

Supplemental digital content for this article is available at http://links.lww.com/ACADMED/A491.

Correspondence should be addressed to Alina Smirnova, Professional Performance Research Group, Academic Medical Center, PO Box 22660, Amsterdam 1100DD, The Netherlands; telephone: (+31) 20-566-1273; e-mail: a.smirnova@amc.uva.nl; Twitter: @asmirnova7.

The primary aim of residency is to shape high-performing physicians in a clinical learning environment that ensures safe and effective care for patients and high-quality learning, safety, and well-being for residents.1 Asch and colleagues2 found a correlation between the location of physicians’ obstetrics–gynecology residency training and the quality of care they deliver as specialists after graduation, though initial performance and years of experience also contributed to the safety of the care they provided after training.3 Similarly, location of residency training was associated with graduates’ future ability to practice conservatively4 and with their spending patterns.5 As a result, efforts to improve the quality of graduate medical education training have focused on improving the clinical learning environment for residents while simultaneously ensuring the safety of the care delivered to patients.6,7

Recent efforts to optimize the clinical learning environment for education and patient care, including duty hours reform efforts by the Accreditation Council for Graduate Medical Education and more recently the Clinical Learning Environment Review program,8 have increasingly incorporated residents’ perceptions of their learning environments.9–11 In the Netherlands, quality improvement efforts in graduate medical education have focused specifically on improving learning climates for residents.12,13 The learning climate incorporates residents’ shared perceptions of their learning environment, including formal and informal aspects of their training, as well as the relevant policies, practices, and procedures that affect that learning environment.14,15 Previous studies found associations between residents’ positive perceptions of their learning environment and higher performance on in-training examinations,16 better quality of life, fewer symptoms of burnout,17 and a lower likelihood of leaving practice.18 In addition to contributing to residents’ professional development, the learning climate also can contribute to patient safety. In particular, residents’ perceptions of a supportive and judgment-free environment can reduce medical errors, as residents are more likely to consult an on-call physician for assistance.19 Moreover, residents who perceive low levels of support in their learning environment are less likely to report medical errors, which can potentially affect patient outcomes.20 Because these studies did not investigate actual patient care outcomes, the effects of the learning climate on patients can only be implied.

Although the clinical learning environment should foster both effective resident learning and safe patient care, no study to date has linked residency learning climate to patient outcomes. To better support residency programs in integrating high-quality residency training with patient safety, a better understanding of how the learning climate relates to patient outcomes in individual departments is needed. The purpose of this study was to investigate the association between the learning climate in obstetrics–gynecology departments and adverse perinatal and maternal outcomes.


Method

Setting and design

This study was set in obstetrics–gynecology departments in community teaching hospitals in the Netherlands. Community teaching hospitals are university-affiliated, nontertiary perinatal centers providing obstetrical care to women with lower-risk pregnancies who do not meet the criteria for transfer to a tertiary perinatal center. They also provide (1) training and supervision for interns and residents in their first two years of graduate medical education, during which time trainees are expected to learn low-risk obstetrics and basic surgery skills, as well as (2) one- to two-year differentiation rotations for residents in the last two years of training.21 We chose this setting because residents at community teaching hospitals perform low-risk deliveries in a relatively young and healthy patient population, and also because previous research has attributed the variation in labor outcomes in part to the variation in obstetrical care delivery and organization.22–24

We retrospectively analyzed learning climate data from the Dutch Residency Educational Climate Test (D-RECT) and clinical registration data from the Netherlands Perinatal Registry (PRN), which is a national database that combines anonymized pooled data about the mother, pregnancy, childbirth, child, and process of obstetrical care for 96% of all births in the Netherlands.25 To ensure the largest sample possible, we studied data from the year in which the most departments evaluated their learning climate (January–December 2013). On the basis of the existing literature, we hypothesized that a more positive learning climate would be associated with lower odds of adverse obstetrical outcomes in obstetrics–gynecology departments in community teaching hospitals.


Data collection

Trainees (residents, fellows, and interns) were invited to evaluate the learning climate of their most recent obstetrics–gynecology department rotation using either an online or a paper-based D-RECT questionnaire, depending on hospital policy. We obtained permission to analyze anonymized D-RECT survey results from the participating departments that used the Web-based D-RECT platform and from the regional educational committees for those departments that used the paper-based D-RECT questionnaire. Departments using the online questionnaire invited trainees via an e-mail with a link to an anonymous questionnaire, sending up to three reminders to nonrespondents. Trainees usually had one month to complete the questionnaire. Because trainees spend one to two years in a single department, most completed the questionnaire during the rotation that they were evaluating. The four departments that used the paper-based questionnaire made it available in their offices, asking trainees to evaluate the department where they had spent the most time in the last three months, which may not have been their current department. In all cases, trainees were assured that their participation was voluntary and their responses anonymous.

We accessed anonymized patient data with special permission from the PRN (PRN-14.20). We excluded any deliveries meeting the criteria for transfer to a tertiary center (i.e., < 32 weeks gestational age, weight < 1,250 grams at birth, or serious maternal complications). Additionally, we excluded any births with congenital abnormalities due to increased risk of perinatal mortality.22

The institutional ethical review board of the Academic Medical Center of the University of Amsterdam confirmed that the Medical Research Involving Human Subjects Act did not apply to this study because the study involved the use of existing data collected for quality improvement purposes and provided a waiver of ethical approval for the overall study design.


Measures

Residency learning climate.

The D-RECT questionnaire was developed in the Netherlands based on qualitative research, expert opinion, and a Delphi panel.26 It has since been extensively validated nationally27 and increasingly used internationally.28 The questionnaire consists of 35 items that assess nine domains of the residency learning climate: educational atmosphere, accessibility of supervisors, coaching and assessment, teamwork, role of the specialty tutor, formal education, peer collaboration, adaptation of work to residents’ competence, and patient sign-out (see Supplemental Digital Appendix 1, Table A1.1 at http://links.lww.com/ACADMED/A491 for more about the D-RECT questionnaire).27 Responses are given on a five-point Likert scale (1 = totally disagree, 2 = disagree, 3 = neutral, 4 = agree, 5 = totally agree) with an additional option of “cannot evaluate.” Higher scores indicate a more favorable learning climate. On the basis of previous large-scale validation studies, we included departments with at least three completed D-RECT questionnaires to ensure reliable overall learning climate scores.26,27


Adverse obstetrical outcomes.

We divided obstetrical outcomes into perinatal outcomes and maternal outcomes. In the absence of a perinatal adverse outcomes index, which is still in development for the Netherlands, we used quality measures for inpatient obstetrics to define the relevant outcomes.29 Adverse perinatal outcomes then included any combination of the following: fetal and early neonatal mortality < 7 days after birth, five-minute Apgar score < 7, or admission of the child to a neonatal intensive care unit in a tertiary center for ≥ 24 hours. Adverse maternal outcomes included postpartum hemorrhage ≥ 1,000 ml and/or the need for a transfusion during or shortly after delivery, maternal death, uterine rupture, or third- or fourth-degree perineal laceration.
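The composite definitions above can be sketched as simple classification functions. This is an illustrative sketch only: the field names (`apgar_5min`, `blood_loss_ml`, etc.) are hypothetical and do not reflect the PRN registry's actual schema.

```python
# Sketch of the composite adverse-outcome definitions described above.
# All field names are hypothetical; the registry's real schema differs.

def adverse_perinatal(rec):
    """True if any component of the composite perinatal outcome occurred."""
    return (
        rec.get("perinatal_death_lt_7d", False)   # fetal/early neonatal death < 7 days
        or rec.get("apgar_5min", 10) < 7          # five-minute Apgar score < 7
        or rec.get("nicu_hours", 0) >= 24         # tertiary NICU admission >= 24 hours
    )

def adverse_maternal(rec):
    """True if any component of the composite maternal outcome occurred."""
    return (
        rec.get("blood_loss_ml", 0) >= 1000       # postpartum hemorrhage >= 1,000 ml
        or rec.get("transfusion", False)          # transfusion during/shortly after delivery
        or rec.get("maternal_death", False)
        or rec.get("uterine_rupture", False)
        or rec.get("laceration_degree", 0) >= 3   # third- or fourth-degree perineal tear
    )
```

Because each composite is an "any of" definition, a delivery counts as adverse if even a single component is present, which is why rare components (e.g., maternal death) contribute little to the overall rates.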


Covariates.

We accounted for any known or potential covariates and confounders. Known covariates included maternal age,22 parity (nulliparous/multiparous),22 ethnicity (Western/non-Western),22,24 multiple gestations (multiple births/singletons), and maternal socioeconomic status.30 We used socioeconomic status scores that were calculated by the Netherlands Institute for Social Research in 2006 using four-digit postal codes, which were then linked to the mean income level and percentage of households with low income or high unemployment in that area. Each score was categorized as low (< 25th percentile), middle (25th–75th percentile), or high (> 75th percentile).31

In addition to these known covariates, we also included two potential confounders: previous D-RECT evaluations and hospital volume. The number of previous D-RECT evaluations (1, 2–3, ≥ 4) refers to the number of times the department had evaluated its learning climate in the past. It was included because learning climate scores tend to improve over time.13 Hospital volume was included because it had been associated with delivery complication rates32,33 and because it can affect the learning climate. Hospital volume was defined as the total number of deliveries at ≥ 22 weeks of gestation in the department in the study year, categorized in tertiles as low (< 1,500 deliveries), average (1,500–1,750 deliveries), or high (> 1,750 deliveries).
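The hospital volume tertiles defined above map directly onto a small categorization function (a sketch of the stated cut-offs, with the 1,500–1,750 band inclusive on both ends):

```python
def volume_category(annual_deliveries):
    """Categorize annual delivery volume using the study's tertile cut-offs:
    low (< 1,500), average (1,500-1,750), high (> 1,750)."""
    if annual_deliveries < 1500:
        return "low"
    if annual_deliveries <= 1750:
        return "average"
    return "high"
```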


Statistical and bias analyses

In total, we included 16 nontertiary obstetrics–gynecology departments that evaluated their learning climate between January and December 2013 in our study. We retrieved adverse outcomes data from the PRN for these departments for the same period (23,629 births). We retrieved responses to the D-RECT questionnaires for the selected obstetrics–gynecology departments from a larger data set containing 559 evaluations of obstetrics–gynecology departments between 2009 and 2014. These data had been cleaned and imputed using an expectation maximization algorithm because the proportion of missing data (< 5%) was considered ignorable.34

We determined each department’s learning climate score by calculating the average D-RECT score for that department. To assess the reliability of the scores, we calculated the intraclass correlation coefficient (2, k), which estimates the reliability of multiple measurements of a group mean based on average group size (k) and can be interpreted as the proportion of the total variance in mean scores that can be explained by the differences between the departments.35
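As a rough illustration of the reliability logic, the proportion of mean-score variance attributable to between-department differences can be estimated from between- and within-group mean squares. The sketch below computes a one-way, average-measures ICC, a simplification of the two-way ICC(2,k) the authors report, under the assumption of one score per trainee:

```python
from statistics import mean

def icc_average(groups):
    """One-way, average-measures ICC: the share of variance in group mean
    scores explained by between-group differences. A simplified stand-in
    for the ICC(2,k) used in the study."""
    n = len(groups)
    grand = mean(x for g in groups for x in g)
    # between-group mean square (weighted by group size)
    msb = sum(len(g) * (mean(g) - grand) ** 2 for g in groups) / (n - 1)
    # within-group mean square (pooled residual variance)
    msw = sum((x - mean(g)) ** 2 for g in groups for x in g) / (
        sum(len(g) for g in groups) - n
    )
    return (msb - msw) / msb
```

For example, an ICC of 0.69 (as reported in the Results) would mean 69% of the variance in departments' mean D-RECT scores reflects true between-department differences rather than rater noise.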

Because the included adverse outcomes were infrequent, we grouped departments in tertiles based on their overall learning climate scores and used multilevel logistic regressions to test whether the odds of adverse maternal and perinatal outcomes differed between the three groups. We chose multilevel logistic regression because it accounts for the clustering of patients within departments and minimizes between-department differences due to unmeasured department- or hospital-level characteristics.36,37

To assess the effects of the covariates, we built three separate models. In the first model, we estimated the odds of adverse perinatal and maternal outcomes given the department’s learning climate score tertile, without adjusting for clustering or other factors. In the second model, we used a random intercept multilevel binary logistic model that controlled for maternal or pregnancy characteristics. In the final model, we added hospital volume and previous D-RECT evaluations as covariates to the second model. We calculated unadjusted and adjusted odds ratios (ORs and aORs, respectively) and their 95% confidence intervals (CIs).
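For the unadjusted first model, the tertile-versus-reference comparison reduces to an odds ratio from a 2×2 table with a Wald confidence interval. The sketch below shows that calculation only; the adjusted second and third models require random-intercept logistic regression software (e.g., R's lme4), which is not reproduced here.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Unadjusted OR and Wald 95% CI for a 2x2 table:
    a = events, b = non-events in the comparison tertile;
    c = events, d = non-events in the reference (lowest) tertile.
    Mirrors the model-1 estimate, without clustering adjustment."""
    or_ = (a * d) / (b * c)
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - z * se_log_or)
    hi = math.exp(math.log(or_) + z * se_log_or)
    return or_, lo, hi
```

Multilevel adjustment typically widens these intervals because outcomes of patients within the same department are correlated, which is one reason the authors report both unadjusted and adjusted estimates.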

We conducted sensitivity analyses to test the robustness of our results against the possible effects of grouping the variables differently and of selecting a different study population. First, we tested whether grouping the scores in quartiles instead of tertiles affected our results. Second, we hypothesized that a department’s learning climate could affect only the deliveries performed by residents. We therefore repeated the analysis for only the deliveries performed by residents. Next, to minimize possible bias from our sampling strategy, we conducted subgroup analyses excluding multiple births and fetal deaths before the onset of labor (antepartum death). Finally, because we could not possibly account for all potential confounders, we conducted a bias analysis to quantify whether part or all of our results could be explained by uncontrolled confounding. Bias analysis is an innovative tool for adjusting observational study data for potential residual confounding due to unknown or unmeasured confounders.38 We also investigated whether varying responses to the D-RECT questionnaire introduced selection bias and affected our results. We conducted the additional selection bias analysis on top of the uncontrolled confounding analysis as part of a multiple-bias modeling exercise to check the sensitivity of our results.39 We implemented the bias adjustment using Monte Carlo methods.38 See Supplemental Digital Appendix 2 at http://links.lww.com/ACADMED/A491 for more about these additional analyses.
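The Monte Carlo bias adjustment can be illustrated with a toy probabilistic bias analysis for a single binary unmeasured confounder U: draw bias parameters from plausible distributions, compute the classic external-adjustment bias factor, and divide it out of the observed OR. The parameter ranges below are purely illustrative, not those used by the authors.

```python
import random

def bias_adjusted_or_median(or_obs, n_draws=10_000, seed=7):
    """Toy probabilistic bias analysis for one binary unmeasured
    confounder U. Draws (RR of U on outcome, prevalence of U in each
    exposure group) and returns the median confounding-adjusted OR.
    Parameter ranges are illustrative assumptions only."""
    rng = random.Random(seed)
    adjusted = []
    for _ in range(n_draws):
        rr_ud = rng.uniform(1.5, 3.0)  # U -> outcome risk ratio
        p1 = rng.uniform(0.2, 0.6)     # prevalence of U, high-climate group
        p0 = rng.uniform(0.2, 0.6)     # prevalence of U, low-climate group
        bias = (p1 * (rr_ud - 1) + 1) / (p0 * (rr_ud - 1) + 1)
        adjusted.append(or_obs / bias)
    adjusted.sort()
    return adjusted[len(adjusted) // 2]
```

Because the prevalence distributions for the two groups are symmetric here, the median bias factor is near 1 and the median adjusted OR stays near the observed OR; asymmetric assumptions about U shift it up or down, which is the behavior the appendix figures map out.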

We used IBM SPSS Statistics for Windows, version 21 (IBM Corp., Armonk, New York) for imputing the missing D-RECT data. Multilevel logistic regressions were performed using the lme4 package in R statistical software, version 3.2.0. The remaining analyses were performed using SAS version 9.4 (SAS Institute, Cary, North Carolina).


Results

Of 171 potential respondents (82% female), 103 (85% female) completed a D-RECT questionnaire (60% response rate, mean 6.4 questionnaires per department). The four departments using paper-based questionnaires allowed their trainees to evaluate the rotation where they spent the most time in the last three months, potentially giving these departments a response rate greater than 100%. Therefore, we estimated the overall response rate based on responses in departments using the online questionnaire (12 departments). The departments’ learning climate scores ranged from 3.7 to 4.4 (mean 4.0, standard deviation 0.3) (see Figure 1). The intraclass correlation coefficient was 0.69, which means that 69% of the variance in departments’ overall learning climate scores can be explained by differences between the departments. Table 1 includes the characteristics of the departments by learning climate score tertile, and Table 2 presents the included perinatal and maternal characteristics as well as the adverse outcomes, also by learning climate score tertile.

Table 1

Table 2

Figure 1

The lowest percentage of adverse perinatal outcomes (1.5%) was found in departments in the lowest learning climate score tertile, while the highest percentage of adverse perinatal outcomes (2.4%) was found in departments in the highest tertile. In the multilevel logistic regression analyses using overall learning climate score as a continuous variable, higher learning climate scores were associated with significantly greater odds of an adverse perinatal outcome (aOR 2.06, 95% CI 1.14–3.72) (see Table 3). Compared with departments in the lowest tertile, departments in the middle tertile had 46% greater odds of an adverse perinatal outcome (aOR 1.46, 95% CI 1.09–1.94), while departments in the highest tertile had 69% greater odds (aOR 1.69, 95% CI 1.24–2.30) (see Table 4). Covariates that significantly contributed to adverse perinatal outcomes included multiple births, nulliparity, maternal age, and socioeconomic status. Neither previous number of D-RECT evaluations nor hospital volume significantly contributed to adverse perinatal outcomes.

Table 3

Table 4

The highest percentage of deliveries with an adverse maternal outcome (9.2%) was found in departments in the middle tertile of learning climate scores. However, in multilevel logistic regression analyses, we did not find a significant association between learning climate score and odds of an adverse maternal outcome (middle vs. lowest tertile: 9.2% vs. 8.9%; OR 1.04, 95% CI 0.93–1.16; highest vs. lowest tertile: 8.7% vs. 8.9%; OR 0.98, 95% CI 0.88–1.10).

The results of the sensitivity and bias analyses are reported in Supplemental Digital Appendixes 1 and 2, respectively, which are available at http://links.lww.com/ACADMED/A491. In the sensitivity analyses, the protective effect of lower learning climate scores against adverse perinatal outcomes remained when D-RECT scores were grouped into quartiles, with the two highest quartiles having significantly greater odds of an adverse perinatal outcome and no difference in the odds of an adverse maternal outcome (see Supplemental Digital Appendix 1, Table A1.2). The pattern did not change when we performed the subgroup analyses (in tertiles) of deliveries performed by residents only, nor when we performed the subgroup analyses of deliveries excluding multiple births and stillbirths (see Supplemental Digital Appendix 1, Tables A1.3–A1.5).

The results of the bias analysis showed that controlling for an unmeasured confounder set that increased the learning climate score but decreased the odds of adverse outcomes (or vice versa) would increase the strength of the observed positive association between learning climate and adverse perinatal outcomes (see Supplemental Digital Appendix 2, Figures A2.1 and A2.2). Controlling for an unmeasured confounder set that positively (or negatively) affected both learning climate scores and the odds of adverse perinatal outcomes would result in a less positive or even reversed association, implying that higher learning climate scores could be associated with fewer adverse perinatal outcomes (see Supplemental Digital Appendix 2, Figures A2.3 and A2.4). The multiple-bias analysis that also adjusted for selection bias (due to biasing response rates) revealed similar conclusions (see Supplemental Digital Appendix 2, Figures A2.5–A2.8).


Discussion

Explanation of findings

Our study is the first to our knowledge to investigate the association between residency learning climate and adverse perinatal and maternal outcomes. We found that higher departmental learning climate scores were associated with increased odds of adverse perinatal outcomes but had no association with the odds of adverse maternal outcomes.

In their commentary on evaluating residency programs on the basis of patient outcomes, Asch and colleagues40 posited that the strength of the association between residency programs and patient outcomes may vary across medical conditions. In other words, a program may provide excellent care for one condition but not for another. Our study supports this argument, as we observed differences in the associations between learning climate and maternal and perinatal outcomes. Moreover, our choice of outcome measures shaped the associations we observed. By studying the odds of any predefined adverse maternal or perinatal outcome, instead of the odds of specific outcomes, we could have masked existing associations with specific complications. In our sample, for instance, as the learning climate score tertile increased, the percentage of severe perineal lacerations decreased, while the percentage of severe maternal blood loss cases increased. These mixed trends could have diluted the overall association we observed.

On the other hand, the association between learning climate score and adverse perinatal outcomes seemed to be driven solely by low Apgar scores, as fetal or early neonatal mortality and neonatal intensive care unit admissions were extremely rare. In term infants, a low five-minute Apgar score (< 7) is related to a higher risk of death,41 neurological disability including cerebral palsy, and long-term cognitive impairment in adulthood.42,43 While the etiology of a low five-minute Apgar score is multifactorial, in term infants, low Apgar scores have been associated with high-risk interventions during labor44 as well as with the quality of neonatal resuscitation.42 The difference in adverse perinatal outcomes between the highest and lowest learning climate score tertiles was 0.9% in absolute terms, which is a small, but significant, difference in potentially preventable perinatal complications, especially when considered over time and across multiple departments.

The association between residency learning climate and adverse perinatal outcomes conflicted with our initial hypothesis and with findings from previous studies linking residents' perceptions with an increased likelihood of consulting a physician on call19 or of reporting medical errors.20 Notably, previous studies focused on individual learning climate perceptions, while our study focused on the learning climate as a department-level characteristic. Our results echoed those from previous studies that found weak and even negative associations between measures of quality at a department or hospital level and individual patient outcomes, even when there was a demonstrable positive association at the individual level.45–47

According to Finney and colleagues,47 aggregated variables reflect department-level characteristics, which in turn are influenced by a different set of confounders than individual-level variables. Analyzing the subgroup of deliveries performed by residents only did not change our results, which reinforces our view that the overall learning climate is not just a measure of individual residents’ participation in deliveries but also reflects attributes of the whole department. Hence, the positive association we observed between a department’s overall learning climate and the odds of adverse perinatal outcomes is likely explained by department processes that can affect both patient safety and the learning climate in the department. Clinical environments are not specifically designed to facilitate learning and face unique challenges in balancing the demands of patient care and those of resident training.48,49 It is, therefore, possible that department processes that simultaneously contribute to the learning climate and patient outcomes compete for the department’s (limited) resources, potentially leading to tensions between the two functions.

Among the covariates we included in our analysis, maternal age, nulliparity, multiple births, and lower socioeconomic status were significant, which corresponded to previous findings in the literature.22,24,30 Number of previous D-RECT evaluations and hospital volume were not significant; however, we kept them in the model because they were considered potential confounding factors. Our bias analyses showed that uncontrolled confounding could strengthen or weaken the observed associations depending on the direction and strength of the relationship between the unmeasured confounder and the learning climate and adverse perinatal outcome (see Supplemental Digital Appendix 2 at http://links.lww.com/ACADMED/A491 for details). In the bias analyses, including the selection bias analyses, we found that this relationship would be weakened if the unmeasured confounder set simultaneously increased (or decreased) the learning climate score (a desirable result) and the odds of an adverse outcome (an undesirable result). However, we cannot identify a confounder that would cause such a change in practice.


Limitations

Given the observational nature of our study, we cannot make any claims of causality. While we have taken great care to identify relevant outcomes using a high-quality national database and a validated instrument for measuring the learning climate to maximize validity, by using data from quality improvement databases, we could not account for all potential sources of bias, including measurement error. Because this work is a hypothesis-generating study, our results may not be generalizable to other time periods, clinical outcomes, or clinical contexts. In particular, this study did not include academic hospitals, which may better integrate learners in clinical care. Furthermore, because of the deidentified nature of the patient data and anonymized D-RECT questionnaires, we could not study how individual residents’ perceptions of their learning climate, or their characteristics such as performance and previous experience, may have affected patient outcomes. Therefore, we could not compare our results with those from other studies examining outcomes of care delivered by individual residents or obstetricians.2,3,50,51


Future research

Future research is needed to confirm our findings in other clinical contexts, including academic and nonacademic hospitals, as well as to explore the relationship between resident- and department-level learning climate characteristics and other metrics of quality and safety, such as patient safety climate. Learning climate measurements should be standardized, and efforts should be made to optimize response rates, both to increase the validity of results and to enable the analysis of learning climate subscales. In addition, the moderating role of learning climate strength (the agreement between residents on given scores) should be investigated. Investigators also could focus on exploring potential effect mechanisms, modifiers, and other explanations by better characterizing lower- and higher-scoring learning environments that systematically report better perinatal outcomes. In particular, the effects of contributing factors, such as residents' experience, the composition of the resident group, supervisory activities, and the role of supervisors' site of training, should be investigated. Assessing the perceptions of formal educators, staff, and program directors could provide additional information about the clinical learning environment as it relates to the residency learning climate as well as to perinatal outcomes.52 Longitudinal studies should specifically explore the long-term outcomes of physicians trained in different learning climates. Finally, extensive (multiple-) bias analysis to detect possible systematic error due to uncontrolled confounding, selection bias, and measurement error should be part of future studies that are large enough to make random sampling error negligible.39,53–55


Conclusions

We found a positive association between learning climate and the odds of adverse perinatal, but not maternal, outcomes. These findings should be considered when optimizing clinical learning environments—for example, by emphasizing patient safety and quality improvement in graduate medical education. Research in similar clinical contexts, with carefully informed covariate selection and examination of department-specific effect modification, is needed to replicate our findings and to explore potential mechanisms behind the associations between learning climate and patient outcomes in clinical teaching departments.


Acknowledgments:

The authors wish to thank M.H. Hof, PhD, Academic Medical Center, Amsterdam, the Netherlands, for his assistance in using the R software for the multilevel logistic regressions. They would like to thank all Dutch midwives, obstetricians, neonatologists, and other perinatal health care providers for registering perinatal information in the Netherlands Perinatal Registry, as well as the trainees who completed the Dutch Residency Educational Climate Test questionnaires. The authors also would like to thank the Foundation of the Netherlands Perinatal Registry for permission to use the registry data. None of these individuals or entities received financial or other compensation for their contributions.


References

1. Leach DC, Philibert I. High-quality learning for high-quality health care: Getting it right. JAMA. 2006;296:1132–1134.
2. Asch DA, Nicholson S, Srinivas S, Herrin J, Epstein AJ. Evaluating obstetrical residency programs using patient outcomes. JAMA. 2009;302:1277–1283.
3. Epstein AJ, Srinivas SK, Nicholson S, Herrin J, Asch DA. Association between physicians’ experience after training and maternal obstetrical outcomes: Cohort study. BMJ. 2013;346:f1596.
4. Sirovich BE, Lipner RS, Johnston M, Holmboe ES. The association between residency training and internists’ ability to practice conservatively. JAMA Intern Med. 2014;174:1640–1648.
5. Chen C, Petterson S, Phillips R, Bazemore A, Mullan F. Spending patterns in region of residency training and subsequent expenditures for care provided by practicing physicians for Medicare beneficiaries. JAMA. 2014;312:2385–2393.
6. Weiss KB, Bagian JP, Nasca TJ. The clinical learning environment: The foundation of graduate medical education. JAMA. 2013;309:1687–1688.
7. Nasca TJ, Weiss KB, Bagian JP. Improving clinical learning environments for tomorrow’s physicians. N Engl J Med. 2014;370:991–993.
8. Weiss KB, Wagner R, Nasca TJ. Development, testing, and implementation of the ACGME Clinical Learning Environment Review (CLER) program. J Grad Med Educ. 2012;4:396–398.
9. Borman KR, Jones AT, Shea JA. Duty hours, quality of care, and patient safety: General surgery resident perceptions. J Am Coll Surg. 2012;215:70–77.
10. Holt KD, Miller RS, Philibert I, Heard JK, Nasca TJ. Residents’ perspectives on the learning environment: Data from the Accreditation Council for Graduate Medical Education resident survey. Acad Med. 2010;85:512–518.
11. Philibert I. Satisfiers and hygiene factors: Residents’ perceptions of strengths and limitations of their learning environment. J Grad Med Educ. 2012;4:122–127.
12. Directive of the Central College of Medical Specialists [in Dutch]. 2009. Utrecht, the Netherlands: Royal Dutch Medical Association.
13. Silkens ME, Arah OA, Scherpbier AJ, Heineman MJ, Lombarts KM. Focus on quality: Investigating residents’ learning climate perceptions. PLoS One. 2016;11:e0147108.
14. Roff S, McAleer S. What is educational climate? Med Teach. 2001;23:333–334.
15. Genn JM. AMEE medical education guide no. 23 (Part 2): Curriculum, environment, climate, quality and change in medical education—A unifying perspective. Med Teach. 2001;23:445–454.
16. Shimizu T, Tsugawa Y, Tanoue Y, et al. The hospital educational environment and performance of residents in the General Medicine In-Training Examination: A multicenter study in Japan. Int J Gen Med. 2013;6:637–640.
17. van Vendeloo SN, Brand PL, Verheyen CC. Burnout and quality of life among orthopaedic trainees in a modern educational programme: Importance of the learning climate. Bone Joint J. 2014;96-B:1133–1138.
18. Degen C, Weigl M, Glaser J, Li J, Angerer P. The impact of training and working conditions on junior doctors’ intention to leave clinical practice. BMC Med Educ. 2014;14:119.
19. Naveh E, Katz-Navon T, Stern Z. Resident physicians’ clinical training and error rate: The roles of autonomy, consultation, and familiarity with the literature. Adv Health Sci Educ Theory Pract. 2015;20:59–71.
20. General Medical Council. National Training Survey 2014: Concerns About Patient Safety. 2014. Manchester, UK: General Medical Council.
21. Scheele F, Caccia N, van Luijk S, den Rooyen C, van Loon K. Better Education for Obstetrics and Gynecology: Dutch National Competency Based Curriculum for Obstetrics & Gynaecology (NL). 2013. Utrecht, the Netherlands: Dutch Association for Obstetrics and Gynecology.
22. Poeran J, Borsboom GJ, de Graaf JP, Birnie E, Steegers EA, Bonsel GJ. Population attributable risks of patient, child and organizational risk factors for perinatal mortality in hospital births. Matern Child Health J. 2015;19:764–775.
23. de Graaf JP, Ravelli AC, Visser GH, et al. Increased adverse perinatal outcome of hospital delivery at night. BJOG. 2010;117:1098–1107.
24. Vos AA, Denktaş S, Borsboom GJ, Bonsel GJ, Steegers EA. Differences in perinatal morbidity and mortality on the neighbourhood level in Dutch municipalities: A population based cohort study. BMC Pregnancy Childbirth. 2015;15:201.
25. The Netherlands Perinatal Registry Trends 1999–2012 [in Dutch]. 2013. Utrecht, the Netherlands: Foundation of the Netherlands Perinatal Registry.
26. Boor K, Van Der Vleuten C, Teunissen P, Scherpbier A, Scheele F. Development and analysis of D-RECT, an instrument measuring residents’ learning climate. Med Teach. 2011;33:820–827.
27. Silkens ME, Smirnova A, Stalmeijer RE, et al. Revisiting the D-RECT tool: Validation of an instrument measuring residents’ learning climate perceptions. Med Teach. 2016;38:476–481.
28. Piek J, Bossart M, Boor K, et al. The work place educational climate in gynecological oncology fellowships across Europe: The impact of accreditation. Int J Gynecol Cancer. 2015;25:180–190.
29. Bailit JL. Measuring the quality of inpatient obstetrical care. Obstet Gynecol Surv. 2007;62:207–213.
30. Ravelli AC, Jager KJ, de Groot MH, et al. Travel time from home to hospital and adverse perinatal outcomes in women at term in the Netherlands. BJOG. 2011;118:457–465.
31. de Graaf JP, Ravelli AC, de Haan MA, Steegers EA, Bonsel GJ. Living in deprived urban districts increases perinatal health inequalities. J Matern Fetal Neonatal Med. 2013;26:473–481.
32. Kyser KL, Lu X, Santillan DA, et al. The association between hospital obstetrical volume and maternal postpartum complications. Am J Obstet Gynecol. 2012;207:42.e1–e17.
33. Sebastião YV, Womack LS, López Castillo H, et al. Hospital variations in unexpected complications among term newborns. Pediatrics. 2017;139(3):e20162364.
34. Dong Y, Peng CY. Principled missing data methods for researchers. Springerplus. 2013;2:222.
35. Bliese PD. Within-group agreement, non-independence, and reliability: Implications for data aggregation and analysis. In: Klein KJ, Kozlowski SWJ, eds. Multilevel Theory, Research, and Methods in Organizations: Foundations, Extensions, and New Directions. 2000. San Francisco, CA: Jossey-Bass.
36. Brumback BA, Dailey AB, Brumback LC, Livingston MD, He Z. Adjusting for confounding by cluster using generalized linear mixed models. Stat Probab Lett. 2010;80:1650–1654.
37. Gelman A, Hill J. Data Analysis Using Regression and Multilevel/Hierarchical Models. 2007. Cambridge, UK: Cambridge University Press.
38. Arah OA, Chiba Y, Greenland S. Bias formulas for external adjustment and sensitivity analysis of unmeasured confounders. Ann Epidemiol. 2008;18:637–646.
39. Thompson CA, Arah OA. Selection bias modeling using observed data augmented with imputed record-level probabilities. Ann Epidemiol. 2014;24:747–753.
40. Asch DA, Epstein A, Nicholson S. Evaluating medical training programs by the quality of care delivered by their alumni. JAMA. 2007;298:1049–1051.
41. Casey BM, McIntire DD, Leveno KJ. The continuing value of the Apgar score for the assessment of newborn infants. N Engl J Med. 2001;344:467–471.
42. Ehrenstein V. Association of Apgar scores with death and neurologic disability. Clin Epidemiol. 2009;1:45–53.
43. Ehrenstein V, Pedersen L, Grijota M, Nielsen GL, Rothman KJ, Sørensen HT. Association of Apgar score at five minutes with long-term neurologic disability and cognitive function in a prevalence study of Danish conscripts. BMC Pregnancy Childbirth. 2009;9:14.
44. Lai S, Flatley C, Kumar S. Perinatal risk factors for low and moderate five-minute Apgar scores at term. Eur J Obstet Gynecol Reprod Biol. 2017;210:251–256.
45. Werner RM, Bradlow ET. Relationship between Medicare’s hospital compare performance measures and mortality rates. JAMA. 2006;296:2694–2702.
46. Bradley EH, Herrin J, Elbel B, et al. Hospital quality for acute myocardial infarction: Correlation among process measures and relationship with short-term mortality. JAMA. 2006;296:72–78.
47. Finney JW, Humphreys K, Kivlahan DR, Harris AH. Why health care process performance measures can have different relationships to outcomes for patients and hospitals: Understanding the ecological fallacy. Am J Public Health. 2011;101:1635–1642.
48. Hoffman KG, Donaldson JF. Contextual tensions of the clinical environment and their influence on teaching and learning. Med Educ. 2004;38:448–454.
49. Dornan T. Workplace learning. Perspect Med Educ. 2012;1:15–23.
50. Aiken CE, Aiken AR, Park H, Brockelsby JC, Prentice A. Factors associated with adverse clinical outcomes among obstetrics trainees. Med Educ. 2015;49:674–683.
51. van der Leeuw RM, Lombarts KM, Arah OA, Heineman MJ. A systematic review of the effects of residency training on patient outcomes. BMC Med. 2012;10:65.
52. Roth LM, Severson RK, Probst JC, et al. Exploring physician and staff perceptions of the learning environment in ambulatory residency clinics. Fam Med. 2006;38:177–184.
53. Rothman KJ, Lash TL, Greenland S. Modern Epidemiology. 3rd ed. 2008. Philadelphia, PA: Lippincott Williams & Wilkins.
54. Arah OA, Sudan M, Olsen J, Kheifets L. Marginal structural models, doubly robust estimation, and bias analysis in perinatal and paediatric epidemiology. Paediatr Perinat Epidemiol. 2013;27:263–265.
55. Arah OA. Bias analysis for uncontrolled confounding in the health sciences. Annu Rev Public Health. 2017;38:23–38.

Copyright © 2017 by the Association of American Medical Colleges