Institution and Specialty Contribute to Resident Satisfaction With Their Learning Environment and Workload

Gruppen, Larry D. PhD; Stansfield, R. Brent PhD; Zhao, Zhuo MS; Sen, Srijan MD, PhD

doi: 10.1097/ACM.0000000000000898
Institutions and Learning Environment

Background This large, multi-institutional study examines the relative contribution of residency specialty and institution to resident satisfaction with their learning environment and workload.

Method Survey responses from 798 residents were linked to institution (N = 20) and specialty (N = 10) and to characteristics of individual residency programs (N = 126) derived from the FREIDA Online database. Hierarchical linear modeling was used to estimate the relative contributions of these factors to resident satisfaction with workload and learning environment.

Results Institution had greater influence than specialty on resident ratings of satisfaction with their workload and learning environment. Institution and specialty accounted for more variance in satisfaction with workload than with the learning environment. There is evidence that characteristics of a given residency program in a given institution have additional impact beyond these main effects. However, characteristics of institutions or programs, such as program selectivity, off-duty periods, or number of faculty, did not explain statistically significant amounts of variance in resident satisfaction ratings.

Conclusions This study is the first to quantify the degree to which institution and specialty contribute to differences in resident perceptions of their learning environment and workload. Although organizational and institutional cultures are presumed to influence the learning environment, estimating the size of these influences requires a multi-institutional and multispecialty dataset, such as this one. These results suggest that there is empirical justification for institutional interventions to improve the learning environment.

Funding/Support: Funding was provided by the National Institute of Mental Health (R01 MH101459, K23 MH095109).

Other disclosures: None reported.

Ethical approval: The University of Michigan institutional review board approved the protocol for this study. All participating subjects provided informed consent.

Correspondence: Larry D. Gruppen, PhD, Department of Learning Health Sciences, 219 Victor Vaughan House, 111 E. Catherine St., Ann Arbor, MI 48109-2054; e-mail: lgruppen@umich.edu.

The impact of the learning environment on the education and performance of trainees has been a focus of educators, researchers, and administrators for many years in higher education.1,2 Numerous efforts have been made to measure the learning environment in the health professions,3,4 and a variety of interventions have been explored as means to improve the environment.5 Indeed, the potential positive and negative impact of the learning environment has led to the institution of periodic reviews of U.S. residency programs through the Clinical Learning Environment Review process of the Accreditation Council for Graduate Medical Education (ACGME).6,7

Much of the research on the learning environment has been done in the context of undergraduate medical education in formal classroom settings; much less has been devoted to the learning environment of residents in clinical practice. Inpatient learning environments are characterized by frequent transitions in location, systems, and colleagues.8 Institutional culture influences the professional behaviors of residents through the environment and the hidden curriculum.9 The impact of transitions on clinical relationships is critical: frequent transitions hinder getting to know staff and building new relationships, which in turn reduces residents' interest in investing in those relationships. Positive relationships, especially with peers, mitigate stress and anxiety and improve communication. Dimensions that undermine relationships include a lack of dwell time, minimal faculty involvement, and geographic decentralization.8

One of the challenges to studying and improving the learning environment is that it is extremely complex. Theoretical formulations have sought to identify key elements of the learning environment and its effects. Mitchell et al10 reviewed empirical articles that addressed factors influencing resident performance and developed a model that highlighted medical education infrastructure (e.g., educator values, content delivery, program setting), health care system infrastructure (e.g., funding and reimbursement, culture, work flow), and individual physician states (e.g., response to job environment, preferences for practice, learning style) as three macrolevel sets of influences, many of which are embodied in the learners’ perceptions of the learning environment. They also noted that most of the studies were single-institution, cross-sectional, and survey-based studies. Hoff et al11 emphasized the importance of the culture and context of the health care institutions in affecting residency education. Critical characteristics of the work context were identified as fatigue, supervisor structure and access, workload, time, physician/nurse collaborative climate, and work/nonwork balance. Within the work context is the residency culture, which includes such elements as trust, cooperation, systems thinking, support and respect, and habit of inquiry.

A major constraint on developing theory and interventions related to the learning environment is that much of the research on it has been limited to single institutions. To the extent that the learning environment is influenced by institutional characteristics, such as size, location, faculty, resources, and research emphasis, single-institution studies will be unable to detect the impact of such institutional factors simply because there is no variation in those variables. To begin to detect the influence of institution-level factors, multi-institutional studies of the learning environment are necessary.

Even within an institution, there will be microlearning environments in different specialties/programs that reflect differences in culture, workload, organization, staffing composition, etc. Furthermore, within a given training program in a given institution, there will be subordinate learning contexts that each contribute unique opportunities for learning.12 How these microenvironments aggregate to form the larger-scale learning environments that are the unit of analysis in this study is as yet unclear. However, to more completely understand the impact of the learning environment on salient outcomes, studies will need to include intentionally different settings within the institution to be able to detect these influences. Although a few studies have explicitly sought to examine these larger-scale characteristics of the learning environment,13,14 we were unable to find any studies that sampled across institutions and across residency programs simultaneously and examined the contributions of these factors to the evaluation of the learning environment.

The present study seeks to expand our understanding of how institutional and specialty/program factors influence resident perceptions of the learning environment. We selected institution and specialty/program as two of many possible large-scale characteristics of the learning environment because these are major factors residents consider in making decisions about future training. They are also reasonable surrogates for numerous, more specific factors that may or may not influence the learning environment (e.g., academic health center versus private hospital, program or institution size, location, level of research emphasis, selectivity of the program). We build on the Intern Health Study, which examined stress during residency across 18 institutions and 10 specialties. Prior work from this study has demonstrated a dramatic increase in depression with the onset of internship15 and a possible increase in medical errors with the 2011 ACGME duty hours changes.16

The sampling of institutions and specialties in this dataset provides the opportunity to explore the extent to which specialties/programs within a given institution are more or less similar. Specifically, the research questions for this study are as follows: (1) What is the relative proportion of variance in resident ratings of satisfaction with their workload or their learning environment that is explained independently by residency specialty and by institution? (2) What program/institutional characteristics predict resident ratings of satisfaction with their workload or their learning environment? We selected resident satisfaction with workload and learning environment because these characteristics are likely to vary substantially among programs and specialties and because prior research has demonstrated that they can be measured efficiently and reliably.17

Method

Data

The procedures used in the Intern Health Study have been detailed previously.15 Briefly, this is a National Institutes of Health–funded prospective longitudinal cohort study of depression and stress during medical internship that includes assessment of various resident characteristics and self-reported duty hours, errors, and satisfaction with workload and learning environment. The larger study was initiated in 2007 and continues to collect data on over 1,500 interns from over 20 institutions in various specialties each year. The institutions constitute a convenience sample chosen to balance academic and community hospitals and to represent all geographic areas of the country. In previous reports, we have shown that among those invited, those who chose to participate in the study were slightly younger (27.5 years old versus 28.8 years old) and included a slightly higher percentage of women (50.9% versus 48.6%). There were no significant differences in specialty, institution, or demographic variables between individuals who chose to participate in the current study and individuals who chose not to participate (demographic information for full residency programs provided by the Association of American Medical Colleges).

The present study focuses on resident satisfaction with the learning environment and workload. By sorting residents by their institution and their specialty, we have the opportunity to determine what impact the institutional and specialty cultures have on resident satisfaction with these aspects of the residency experience.

To assess resident workload satisfaction (WS) and assessment of the residency learning environment (LE), residents completed a modified Resident Questionnaire (RQ) in month 12 of their residency. The original RQ17 consists of 33 Likert-type rating items, which were grouped into three scales: emotional distress (11 items, Cronbach alpha = 0.84), WS (8 items, alpha = 0.85), and LE satisfaction (9 items, alpha = 0.84). The modified version in this study removed the emotional distress scale because it was highly collinear with the depressive and anxiety symptoms measured and reported in other arms of the Intern Health Study. Thus, we did not assess emotional distress separately in this study. Although WS and LE do not comprehensively assess all aspects of residency, the RQ has been shown to capture critical and variable components of resident perspectives on their residency program and institution.17 The 17 items were administered through a secure online Web site designed to maintain confidentiality, with participants identified only by numbers.
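
As an aside for readers replicating this kind of scale analysis, internal consistency coefficients such as the Cronbach alphas above are commonly computed with the psych package in R (the language used for our analyses; see Analyses below). The sketch is illustrative only: the data frame rq and the item column names are hypothetical placeholders, not the RQ's actual labels.

    # Illustrative sketch only: Cronbach's alpha for an 8-item workload
    # satisfaction scale. rq is a hypothetical data frame of item
    # responses; columns ws1..ws8 are placeholder names.
    library(psych)
    ws_items <- rq[, paste0("ws", 1:8)]
    alpha(ws_items)$total$raw_alpha  # the article reports alpha = 0.85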

There were 20 institutions represented in this dataset: Indiana University–Purdue University at Indianapolis, Greenwich Hospital (Yale), Massachusetts General Hospital, Medical University of South Carolina, University of Cincinnati, Yale University, University of Michigan, Vanderbilt University, University of Iowa, Mt. Sinai School of Medicine, Emory University, University of California San Francisco, University of Connecticut, Wayne State University, Bridgeport Hospital (Yale), University of Massachusetts, University of Southern California, Mayo Clinic, Hospital of St. Raphael, and University of Texas Southwestern.

The participating residents represented 10 different specialties (emergency medicine, family practice, internal medicine, pediatrics, medicine–pediatrics, obstetrics–gynecology, psychiatry, general surgery, transitional, and other). Not all specialties were represented at all of the institutions; a total of 126 individual residency programs were represented in this dataset.

Characteristics of the individual residency programs were derived from the FREIDA Online database.18 FREIDA Online contains information on over 9,500 graduate medical education programs accredited by the ACGME, as well as over 100 combined specialty programs. Program data for FREIDA Online are collected by the American Medical Association and the Association of American Medical Colleges via an annual national GME census survey. Variables extracted from the FREIDA database include number of residency positions; selectivity (number of positions/number of candidates interviewed); number of faculty (full- and part-time) in the program; distribution of trainees (%) among international medical graduates, DO degree holders, and U.S. MD degree holders; distribution of trainees (%) among men and women; average hours per week on duty during the first year (excluding beeper call); average number of 24-hour off-duty periods per week during the first year; night float system (yes/no); whether the program offers awareness and management of fatigue in residents/fellows (yes/no); average hours per week of regularly scheduled lectures/conferences during the first year; and training during the first year in ambulatory nonhospital community-based settings, such as physician offices and community clinics (% of total hours).

Analyses

We examined three mixed models (hierarchical linear models) of resident satisfaction with their workload and learning environment. Models were univariate general linear models fit with restricted maximum likelihood (REML) estimation, which reduces bias in the estimates of random factor variance. REML produces estimates of the mean satisfaction for each level of each random factor. A large number of characteristics of institutions and specialties can impact resident satisfaction with their workload and learning environment. Treating institution and specialty as random factors allowed us to estimate their aggregate impact in a statistically fair way without modeling the specific characteristics of the institution and specialty that may be responsible. Random effects are unobserved factors that influence the outcome variable; in contrast, the program-level measures from the FREIDA database are directly observed and so can be modeled as fixed factors to estimate their impact directly. Under the null hypothesis that institutional or specialty factors have no effect on ratings, the percentage of variance accounted for would be 0%.

The first two models contained only random factors; the third contained both a random factor and a set of fixed factors, which were tested using an ANOVA model with the Satterthwaite approximation for the error term degrees of freedom after accounting for the random factor.19 All analyses were conducted in R version 3.1.2, with the lmerTest package version 2.0-20 and the lme4 package version 1.1-7 for the mixed model analyses, chi-square tests, and ANOVA tests of fixed factors.
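
To make the modeling approach concrete, the sketch below shows the general setup in the R packages just named. The data frame rq and all column names are our own illustrative stand-ins for the study dataset, not code from the study itself.

    # Assumed layout: one row per resident, with WS and LE scale scores
    # and institution/specialty identifiers (all names hypothetical).
    library(lmerTest)              # loads lme4; adds Satterthwaite tests
    rq <- read.csv("rq_data.csv")  # placeholder file name

    # In lmer formulas, random factors appear as (1 | factor) terms and
    # fixed factors enter directly, e.g., WS ~ n_faculty + (1 | program).
    # lmer fits by REML by default (REML = TRUE).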

Model 1.

We modeled ratings of WS and LE by institution and specialty as separate (crossed) random factors to determine which predicts WS or LE better: knowing a resident's institution or knowing his/her specialty. We compared the variance of these estimates with the overall rating variance and expressed each factor's contribution as a percentage of the variance accounted for. We used likelihood ratio chi-square tests to test the statistical significance of these estimates against a null hypothesis of 0% variance.
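
Under the same assumptions, Model 1 might be written as follows in lme4/lmerTest syntax (the WS model is shown; the LE model is analogous):

    # Model 1: institution and specialty as crossed random intercepts.
    m1 <- lmer(WS ~ 1 + (1 | institution) + (1 | specialty), data = rq)

    # Percentage of total variance accounted for by each random factor.
    vc <- as.data.frame(VarCorr(m1))
    round(100 * vc$vcov / sum(vc$vcov), 1)

    # Likelihood ratio chi-square test of the institution component:
    # refit without it and compare, keeping the REML fits (refit = FALSE).
    m1_no_inst <- lmer(WS ~ 1 + (1 | specialty), data = rq)
    anova(m1_no_inst, m1, refit = FALSE)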

Model 2.

We then modeled WS and LE at the program level (specialty and institution combined) as a single random factor and compared the percentage of variance explained in this analysis with that explained by Model 1. If program-level factors have more impact on WS and LE satisfaction, then the percentage of variance explained by this model should be higher than the sum of the institution and specialty effects in Model 1.
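
Continuing the illustrative sketch, Model 2 replaces the two crossed factors with their combination:

    # Model 2: program (the institution x specialty combination) as the
    # sole random factor.
    rq$program <- interaction(rq$institution, rq$specialty, drop = TRUE)
    m2 <- lmer(WS ~ 1 + (1 | program), data = rq)

    # Percentage of variance attributed to program, for comparison with
    # the sum of the institution and specialty components from Model 1.
    vc2 <- as.data.frame(VarCorr(m2))
    round(100 * vc2$vcov / sum(vc2$vcov), 1)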

Model 3.

We then modeled resident satisfaction ratings by program as a random factor and by the variables extracted from the FREIDA database as fixed factors. In addition to testing the statistical significance of the fixed factors, we also compared the percentage of variance explained by program in this model with that in Model 2. An increase in that percentage indicates that there are aspects of programs not included in Model 2 that impact resident ratings.
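
A corresponding sketch of Model 3 appears below; the FREIDA-derived predictor names are again hypothetical placeholders, and only a subset of the extracted variables is shown:

    # Model 3: program as a random factor plus FREIDA-derived program
    # measures as fixed factors (predictor names illustrative).
    m3 <- lmer(WS ~ n_positions + selectivity + n_faculty + duty_hours +
                 off_duty_periods + night_float + (1 | program),
               data = rq)

    # F tests of the fixed factors using the Satterthwaite approximation
    # for the denominator degrees of freedom (lmerTest).
    anova(m3, ddf = "Satterthwaite")

    # Program variance after adjusting for the fixed factors, for
    # comparison with the Model 2 estimate.
    as.data.frame(VarCorr(m3))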

For all results, we report effect sizes (percentage of variance in the dependent variable accounted for by the independent variable) to allow comparisons of the relative magnitude of influence of the various independent variables. Only statistically significant effects are described.

Results

A total of 798 residents contributed data to these analyses. WS and LE satisfaction are moderately correlated with each other (r = 0.58, t(796) = 20.18, P < .0001), which suggests that they measure related underlying constructs of satisfaction.
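
This is a standard Pearson correlation test: with n = 798, t = r√(n − 2)/√(1 − r²), so r = 0.58 gives t ≈ 20 on 796 degrees of freedom, consistent with the reported value. In R (column names assumed as before):

    # Pearson correlation between the two satisfaction scales;
    # df = n - 2 = 796 for the t statistic.
    cor.test(rq$WS, rq$LE)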

For Model 1, institution had more of an effect than specialty on both WS and LE satisfaction. The impact of both institution and specialty was greater for WS than for LE. Table 1 (Model 1) shows the percentage of variance explained by each of these factors.

The rankings of specialties in terms of mean satisfaction rating varied somewhat between the WS (Figure 1) and LE (Figure 2) dimensions. Pediatrics had the lowest mean satisfaction rating for both outcomes, whereas internal medicine had the highest mean WS, and surgery had the highest LE satisfaction rating.

As would be predicted from the larger percentage of variance accounted for by institution, the contrasts between the institutions with the highest mean satisfaction and those with the lowest are greater than the corresponding contrasts for specialty. Figure 2 illustrates the contrasts for satisfaction with LE. The contrasts for WS are similarly large, but the ranking of the institutions differs somewhat between the two outcome variables, as was the case for specialty.

In Model 2, program predicted more variance for both WS and LE than the sum of the effects of institution and specialty in Model 1 (see Table 1).

In Model 3, none of the program-level measures significantly predicted resident WS or LE satisfaction. Parameter estimates were very small, and none yielded a t value greater than 1.3 (P = .198). These statistically nonsignificant results could indicate that residents differ in their satisfaction regardless of their program, but the fact that the percentage of variance explained by program is higher in Model 3 than in Model 2 (Table 1) suggests that program factors not captured by the FREIDA-derived measures are affecting residents' satisfaction. The sample size for Model 3 was 371 because of missing data in the FREIDA database for a number of programs.

Discussion

Our results indicate that residents' satisfaction with the learning environment and their satisfaction with workload are closely related. It is also apparent that both vary systematically with specialty and institution. The effect of specialty is small but unlikely to be due to chance (i.e., it is statistically significant).

To an even greater degree, institutions, independent of specialty, vary in resident satisfaction with workload and learning environment. That institutions vary in learning environment is consistent with models of the complexity of the clinical environment. These differences underscore the importance of institutional investments in resources to support programs and interventions to improve the institutional culture for patients and employees.

However, it was intriguing that even though program and institution influence resident satisfaction with both variables, none of the characteristics of the programs or institutions drawn from the FREIDA database predicted resident satisfaction. This suggests that the influence of institution and residency program is driven by characteristics other than those reported in the FREIDA database. Future studies should investigate other institutional and residency factors, such as feedback frequency, quality of the faculty–resident relationship, level of competition or cooperation among residents, institutional and program culture, and leadership style, that may more directly reflect resident perceptions.

It is also noteworthy that although institution and specialty are statistically significant predictors of these resident satisfaction variables, residency programs account for even more variance. Further, even after accounting for program-level variance, considerable variance remains to be explained by other factors; no model accounted for more than a quarter of the variance in residents' perceptions. Whether this is a large or small effect size is difficult to judge, given the lack of prior studies and comparable effect sizes. However, the apparently small effects may not be surprising, given the complex influence of contextual and environmental factors on resident behavior and attitudes. It is important to recognize that the learning environment is not an objective entity; it is always interpreted through the eyes of the learner. Interventions and programs may be instituted to improve the learning and work environment, but their effect is as much subjective as it is objective.

There are a number of limitations to this study. The information obtained from the FREIDA database is based on institutional self-report and may not be uniformly current for all programs. The programs and institutions were not perfectly cross-matched, leaving gaps in the sampling matrix that required us to treat these dimensions as random variables. For instance, only 5 of the 20 institutions had medicine–pediatrics programs represented in the sample, so the estimated impact of this specialty is based on a limited sample. Using REML produced conservative estimates of institutional and specialty effects by assuming no effects where there were missing data. Thus, the effects reported here are likely underestimates of the real-world impact of institution and specialty. We also do not have any information about specific interventions at the level of the institution or residency program that might have affected the work or learning environments.

Gaps in the FREIDA database also censored the data available for use in Model 3, which examined the relationship of program and institutional characteristics to resident satisfaction variables. The smaller dataset leads to less precise estimates of the effect sizes (percentage of variance accounted for), which may explain the lack of statistical significance. It may also introduce bias into the results, but the nature of these potential biases is difficult to estimate without knowing the reasons for the missing data in the FREIDA database.

Future research efforts in this area should continue to compare learning environment outcomes among different institutions and programs. Studies need to better document the magnitude of the effects that institutional and program characteristics might have on these outcomes and use these findings to build more comprehensive theories of the learning environment that simultaneously account for individual and organizational variables.

This study contributes to our understanding of how specialty and institutional factors influence resident satisfaction with their learning environment and workload. It is one of very few studies that directly evaluate institutional and specialty influence on these outcomes and provide estimates of the size of the effects these factors have. As such, it advances beyond the limited scope of single-institution studies of the learning environment, and it helps to reinforce the value of institutional investment in improving the learning and workplace settings.

References

1. Fraser BJ. Learning environment in curriculum evaluation: A review. Eval Educ. 1981;5:1–93
2. Fraser BJ. Research on classroom learning environment in the 1970’s and 1980’s. Stud Educ Eval. 1980;6:221–223
3. Colbert-Getz JM, Kim S, Goode VH, Shochet RB, Wright SM. Assessing medical students’ and residents’ perceptions of the learning environment: Exploring validity evidence for the interpretation of scores from existing tools. Acad Med. 2014;89:1687–1693
4. Roff S. New resources for measuring educational environment. Med Teach. 2005;27:291–293
5. Genn JM. AMEE medical education guide no. 23 (part 2): Curriculum, environment, climate, quality and change in medical education—a unifying perspective. Med Teach. 2001;23:445–454
6. Accreditation Council for Graduate Medical Education. Clinical learning environment review overview. http://www.acgme.org/acgmeweb/Portals/0/PDFs/CLER/CLEROverview_print.pdf. Accessed July 22, 2015
7. Weiss KB, Bagian JP, Nasca TJ. The clinical learning environment: The foundation of graduate medical education. JAMA. 2013;309:1687–1688
8. Bernabeo EC, Holtman MC, Ginsburg S, Rosenbaum JR, Holmboe ES. Lost in transition: The experience and impact of frequent changes in the inpatient learning environment. Acad Med. 2011;86:591–598
9. Hafferty FW, Franks R. The hidden curriculum, ethics teaching, and the structure of medical education. Acad Med. 1994;69:861–871
10. Mitchell M, Srinivasan M, West DC, et al. Factors affecting resident performance: Development of a theoretical model and a focused literature review. Acad Med. 2005;80:376–389
11. Hoff TJ, Pohl H, Bartfield J. Creating a learning environment to produce competent residents: The roles of culture and context. Acad Med. 2004;79:532–539
12. Gofton W, Regehr G. Factors in optimizing the learning environment for surgical training. Clin Orthop Relat Res. 2006;449:100–107
13. Thrush CR, Hicks EK, Tariq SG, et al. Optimal learning environments from the perspective of resident physicians and associations with accreditation length. Acad Med. 2007;82(10 suppl):S121–S125
14. de Oliveira Filho GR, Vieira JE. The relationship of learning environment, quality of life, and study strategies measures to anesthesiology resident academic performance. Anesth Analg. 2007;104:1467–1472
15. Sen S, Kranzler HR, Krystal JH, et al. A prospective cohort study investigating factors associated with depression during medical internship. Arch Gen Psychiatry. 2010;67:557–565
16. Sen S, Kranzler HR, Didwania AK, et al. Effects of the 2011 duty hour reforms on interns and their patients: A prospective longitudinal cohort study. JAMA Intern Med. 2013;173:657–662
17. Seelig CB, DuPre CT, Adelman HM. Development and validation of a scaled questionnaire for evaluation of residency programs. South Med J. 1995;88:745–750
18. FREIDA (Fellowship and Residency Electronic Interactive Database Access) Online. http://www.ama-assn.org/ama/pub/education-careers/graduate-medical-education/freida-online.page. Accessed March 24, 2014
19. Schaalje GB, McBride JB, Fellingham GW. Adequacy of approximations to distributions of test statistics in complex mixed linear models. J Agric Biol Environ Stat. 2002;7:512–524
© 2015 by the Association of American Medical Colleges