Many medical residents experience a marked increase in depression and other mental health problems during residency training.1,2 Depression in residents is associated with suicidal ideation,3–5 motor vehicle accidents,6 medical errors,1,3,7,8 and lower adherence to safety and practice standards,3 indicating that depression among residents has negative consequences for both residents and their patients.
A large body of work has identified individual-level factors that are associated with depression in resident physicians.1,8,9 Specifically, female gender,1,10 high neuroticism,1,10,11 perceived medical errors,1,3,7,8 stressful life events,1,12 and low subjective well-being13 are consistently associated with depression during residency training.
In addition to individual factors, residency programs are likely to play a critical role in the mental health of resident physicians. The clinical learning environment within residency programs has been linked to the quality of resident education,14 performance,15 and well-being.16 A small number of factors related to residency programs have been associated with resident depression, including high effort and low reward imbalance17–19 and a low degree of job autonomy.20 However, to our knowledge, no studies to date have systematically assessed a large sample of residency programs to identify different program-level factors associated with depression.
Here, we conduct a prospective, longitudinal study to assess first-year residents attending 54 internal medicine programs in the United States to investigate the associations between program-level measures of organizational structure, workload, and learning environment and resident depressive symptoms.
Study setting and participants
As part of the Intern Health Study, a prospective longitudinal cohort study of depression and stress during medical internship, individuals who were either graduating from medical school or entering residency at participating institutions were invited to participate in the study.1 In total, we invited 3,317 individuals entering internal medicine programs during the 2012, 2013, 2014, and 2015 academic years to participate in this prospective cohort study via e-mail, two months prior to commencing their internships. E-mail invitations for 82 individuals were returned as undeliverable. A total of 1,941 of the 3,235 remaining individuals, belonging to 239 different internal medicine programs, agreed to participate in the study and returned the baseline assessment (overall response rate: 60.0%). To ensure a sufficient number of subjects providing data for each program in a given year, only programs with a minimum of five interns completing at least one of the four follow-up surveys were included. Further, programs were included only if they were listed in the American Medical Association Fellowship and Residency Electronic Interactive Database (FREIDA online).21 A total of 54 programs and 1,276 interns met the criteria and were therefore included in this study. The University of Michigan institutional review board approved the study. All participants provided informed consent and were compensated $50 each.
We conducted all surveys through a secure online website designed to maintain confidentiality, with participants identified only by nondecodable identification numbers. These procedures, which follow those of the Intern Health Study, have been detailed in a previous publication.1
Participants completed an online survey two months prior to commencing internship, which included questions about their age; sex; self-reported history of depression; neuroticism22; and early life stress23; as well as an assessment of their depressive symptoms using the Patient Health Questionnaire-9 (PHQ-9) (see Supplemental Digital Appendix 1, available at http://links.lww.com/ACADMED/A621, which includes a copy of the survey instruments). The PHQ-9 is a component of the Primary Care Evaluation of Mental Health Disorders inventory (PRIME-MD) and consists of nine self-reported items designed to screen for depressive symptoms.24 For each item, the respondent indicates whether, during the previous two weeks, the depressive symptom had bothered him/her “not at all,” “several days,” “more than half the days,” or “nearly every day.” A score of 10 or greater on the PHQ-9 has a sensitivity of 93% and a specificity of 88% for the diagnosis of major depressive disorder.25
We contacted interns via e-mail at months 3, 6, 9, and 12 of the internship year and asked them to complete the PHQ-9 and a survey1 that inquired about duty hours (“How many hours have you worked in the past week?”). In addition to assessing depressive symptoms and duty hours, the 12-month survey assessed workload satisfaction and learning environment ratings of residency programs through a resident questionnaire (RQ)26 (see Supplemental Digital Appendix 1, available at http://links.lww.com/ACADMED/A621, which includes a copy of the survey instruments). The workload satisfaction (8 items; alpha = 0.85) and learning environment (9 items; alpha = 0.84) components of the RQ have been shown to be valid measures of different aspects of residents’ perspectives on their programs.26 The workload satisfaction scale contains items related to call schedule; caseload; excess load; time to read; clerical and administrative support; hospital support services; time demands; and workups. The learning environment scale includes items related to faculty feedback, counseling, and support; learning experience during inpatient rotations and scheduled conferences; instructions received; and cooperation among residents. For each item, we asked interns to indicate whether they agreed with the statements of the instrument using a five-point Likert scale ranging from “strongly disagree” to “strongly agree.” RQ completion was not required for participants’ inclusion in the analyses. Each of the 54 programs assessed by the present study included at least three interns who completed the RQ in the 12th month of their internship, with a range of 3 to 65 interns (interquartile range [IQR] = 12) among the different programs.
Organizational structure of residency programs.
We collected residency program information about size (number of residency positions), number of faculty, proportion of full-time faculty, average hours of scheduled lectures/conferences per week during the first year, and whether the program offers education on the awareness and management of fatigue in residents/fellows from FREIDA online.21 Data available on FREIDA online come primarily from the National Graduate Medical Education Census (GME Track), an annual online survey conducted by the American Medical Association and the Association of American Medical Colleges.21 In cases where size information for programs could not be accessed from FREIDA online, we sought this information from the institutions’ websites.
We extracted information regarding each program’s research ranking position on March 17, 2017, from Doximity, an online professional network for physicians in the United States.27 Doximity is currently the largest networking community of physicians in the United States, with more than 70% enrollment.28 Doximity calculates a program research output score for each residency program, using the collective h-index of publications authored by alumni graduating within the past 15 years, as well as research grants awarded.27 The research ranking for each program is determined by comparing research output scores within the same specialty.27
Program-level prevalence and change in depressive symptoms.
To estimate the mean prevalence of depression within residency programs, we determined the number of individuals who scored 10 or higher on the PHQ-9 at one or more quarterly assessments. Changes in the depressive symptoms of individual participants (PHQ-change) were calculated by subtracting the baseline PHQ-9 score from the mean PHQ-9 score across the quarterly assessments (PHQ-change = mean PHQ-9 at 3-, 6-, 9-, and 12-month assessments − baseline PHQ-9). We calculated the mean PHQ-change for each program to estimate program-level changes in depressive symptoms. The significance of the mean change in program-level PHQ-9 depressive symptom scores from baseline to the internship year was assessed with a paired t test.
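To make these definitions concrete, the program-level prevalence and PHQ-change computations can be sketched as follows. This is an illustrative sketch only, not the study's SPSS analysis; the intern records and program labels are hypothetical.

```python
from statistics import mean

# Hypothetical intern records: baseline PHQ-9 plus the four quarterly
# (3-, 6-, 9-, and 12-month) PHQ-9 scores. Not study data.
interns = [
    {"program": "A", "baseline": 2, "quarterly": [5, 9, 11, 6]},
    {"program": "A", "baseline": 3, "quarterly": [4, 5, 6, 5]},
    {"program": "B", "baseline": 1, "quarterly": [12, 10, 9, 11]},
]

def phq_change(intern):
    # PHQ-change = mean PHQ-9 across quarterly assessments - baseline PHQ-9
    return mean(intern["quarterly"]) - intern["baseline"]

def program_stats(interns):
    by_program = {}
    for it in interns:
        by_program.setdefault(it["program"], []).append(it)
    out = {}
    for prog, members in by_program.items():
        # Prevalence: share of interns scoring >= 10 at one or more assessments
        depressed = sum(any(s >= 10 for s in it["quarterly"]) for it in members)
        out[prog] = {
            "prevalence": depressed / len(members),
            "mean_phq_change": mean(phq_change(it) for it in members),
        }
    return out
```

The per-program mean PHQ-change values produced this way are the outcome variable used in the program-level analyses that follow.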
Extraction and transformation of program-level variables.
In addition to the change in mean depressive symptoms, for each program, we calculated average duty hours and mean RQ scores for learning environment, workload satisfaction, and individual RQ items. To control for individual-level factors, we included the factors previously shown to be associated with depression during internship (female gender, baseline PHQ depressive symptoms, childhood stress, and neuroticism).1
Kolmogorov–Smirnov normality tests were conducted for all numerical variables, with square-root transformation applied to variables that were not normally distributed. We excluded the variable “offering awareness and management of fatigue in residents/fellows” from the analysis because all programs included in this study reported “yes” for this variable on FREIDA online.
Stability of program-level variables across cohorts.
To determine whether program effects were stable across different cohorts of interns attending the same residency programs, we used Pearson correlations to assess the associations of program-level measures of change on depressive symptoms, learning environment, and workload across initial (2012–2013) and later (2014–2015) cohorts.
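The cross-cohort stability check amounts to correlating each program-level measure between the two cohort groups. A minimal sketch is given below; the per-program values are invented for illustration, and the significance testing (P values) reported in the Results is omitted here.

```python
from math import sqrt

def pearson(x, y):
    # Plain Pearson correlation coefficient for paired program-level values.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical per-program mean PHQ-change, one entry per program,
# in the initial (2012-2013) and later (2014-2015) cohorts.
early = [3.1, 4.0, 2.8, 5.2, 3.6]
later = [3.4, 3.9, 3.0, 4.8, 3.3]

# A high r indicates that program effects are stable across cohorts.
r = pearson(early, later)
```

The same computation is repeated for the program-level workload and learning environment ratings.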
Program-level predictors of change in depressive symptoms.
To identify program-level variables associated with change in depressive symptoms within residency programs, we first used Pearson correlations to identify which variables were correlated with mean change in depressive symptoms. Subsequently, significant variables were entered into a stepwise linear regression model to identify significant predictors while accounting for collinearity among variables.
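As an illustrative sketch of this two-stage procedure, the code below implements greedy forward selection on R² improvement with an ordinary least squares fit. Note this approximates, rather than reproduces, SPSS's stepwise criteria (which are based on F-statistic P values for entry and removal), and all data here are synthetic.

```python
def ols_r2(X, y):
    # Fit y on an intercept plus the predictor columns in X via the normal
    # equations (Gaussian elimination) and return the model R^2.
    n = len(y)
    cols = [[1.0] * n] + X
    k = len(cols)
    A = [[sum(ci * cj for ci, cj in zip(cols[i], cols[j])) for j in range(k)]
         for i in range(k)]
    b = [sum(c * yi for c, yi in zip(cols[i], y)) for i in range(k)]
    for i in range(k):  # forward elimination with partial pivoting
        p = max(range(i, k), key=lambda r: abs(A[r][i]))
        A[i], A[p] = A[p], A[i]
        b[i], b[p] = b[p], b[i]
        for r in range(i + 1, k):
            f = A[r][i] / A[i][i]
            for c in range(i, k):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    beta = [0.0] * k
    for i in reversed(range(k)):  # back substitution
        beta[i] = (b[i] - sum(A[i][j] * beta[j] for j in range(i + 1, k))) / A[i][i]
    pred = [sum(bv * cols[j][t] for j, bv in enumerate(beta)) for t in range(n)]
    my = sum(y) / n
    ss_res = sum((yi - pi) ** 2 for yi, pi in zip(y, pred))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1 - ss_res / ss_tot

def forward_stepwise(predictors, y, min_gain=0.02):
    # Greedily add the predictor that most improves R^2, stopping when
    # no remaining candidate adds at least `min_gain`.
    selected, best_r2 = [], 0.0
    while True:
        gains = [(ols_r2([predictors[s] for s in selected] + [col], y) - best_r2, name)
                 for name, col in predictors.items() if name not in selected]
        if not gains:
            break
        gain, name = max(gains)
        if gain < min_gain:
            break
        selected.append(name)
        best_r2 += gain
    return selected, best_r2

# Synthetic program-level data: "hours" drives the outcome, "noise" does not.
predictors = {"hours": [1, 2, 3, 4, 5, 6], "noise": [5, 1, 4, 2, 6, 3]}
y = [2.1, 3.9, 6.2, 7.8, 10.1, 11.9]
selected, best_r2 = forward_stepwise(predictors, y)
```

With these synthetic data, only the informative predictor survives selection, mirroring how the stepwise model retains a subset of the screened variables.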
Additionally, to assess whether significant program-level associations were driven by differences between subjects that were present before the start of the internship, we conducted a multivariable stepwise linear regression analysis, including individual-level variables previously associated with change in depressive symptoms.1
Cross-cohort predictors of depressive symptoms.
To accurately estimate the effect size of program-level predictors of change in depressive symptoms, we performed a two-step secondary analysis using data from 2012 and 2013 cohorts as a training set to identify significant predictors (2012–2013 training), and data from 2014 and 2015 cohorts as a test set for our predictions (2014–2015 test).
First, we conducted Pearson correlations to identify which program-level variables were associated with mean change in depressive symptoms in the 2012–2013 training set, and entered all significant variables into a stepwise linear regression model. As a second step, we applied the regression model constructed from the 2012–2013 training set to predict mean change in depressive symptoms in the 2014–2015 test set. By testing whether program-level predictors identified in one cohort of interns predicted change in depressive symptoms in a different cohort at the same programs, this cross-cohort analysis provides a more accurate estimate of the effect of program-level predictors while minimizing the influence that interns’ individual characteristics could have on their ratings of the program variables assessed.
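The two-step cross-cohort analysis can be illustrated with a single hypothetical predictor; the actual model included several program-level predictors, and all values below are invented for illustration.

```python
def fit_line(x, y):
    # Ordinary least squares for one program-level predictor.
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    beta = (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))
    return beta, my - beta * mx

def variance_explained(y_true, y_pred):
    # R^2 of out-of-sample predictions against observed values.
    my = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - my) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Hypothetical program-level data: mean weekly duty hours vs. mean PHQ-change.
train_hours = [62, 70, 58, 75, 66]    # 2012-2013 training cohorts
train_change = [3.0, 4.1, 2.6, 4.6, 3.5]
test_hours = [60, 72, 64, 68]         # 2014-2015 test cohorts
test_change = [3.1, 4.0, 3.2, 3.9]

# Step 1: fit on the training cohorts. Step 2: predict the test cohorts.
beta, intercept = fit_line(train_hours, train_change)
preds = [beta * h + intercept for h in test_hours]
r2_test = variance_explained(test_change, preds)
```

Because the model is fit and evaluated on independent cohorts, `r2_test` estimates how much program-level variance the predictors explain without the circularity of same-cohort ratings.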
All analyses were performed using SPSS statistical software, version 21.0 (SPSS Inc., Chicago, Illinois).
Representativeness of the sample
A total of 1,276 individuals from 54 programs participated. Compared with the set of all internal medicine programs registered on FREIDA, programs included in this study had a similar proportion of female residents (45.5% [581/1,276] versus 42.2%) but a larger average program size (123 versus 55 residents).29 The number of individuals included per program ranged from 5 to 101. Characteristics of residents and programs included in the study are presented in Table 1.
Program-level prevalence and change in depressive symptoms
Considering the criteria for major depression developed by Kroenke et al25 (PHQ-9 ≥ 10), the mean program prevalence of individuals who met such criteria for depression at one or more quarterly assessments was 36.6% (SD = 17.8%), with prevalence rates ranging from 0.0% to 80.0% (IQR = 15.4%) among the 54 programs included in the full study set (2012–2015; N = 1,276).
The mean program PHQ-9 score changed from 2.3 (SD = 0.8) before internship to 5.9 (SD = 1.8), 6.0 (SD = 1.5), 5.9 (SD = 2.1), and 5.5 (SD = 1.8) at 3, 6, 9, and 12 months of internship, respectively. For 53 of 54 programs, mean depressive symptom scores increased from baseline to the quarterly assessments. The mean program-level change in depressive symptoms from baseline to internship was 3.6 points (SD = 1.4; paired t = 19.5; P < .001), with a range of −0.3 to 8.8 (IQR = 1.1) among the programs included.
Stability of program-level variables across cohorts
To evaluate whether workload, learning environment, and change in depressive symptoms were stable across different cohorts of interns attending the same residency programs, we performed correlational analyses for the 49 programs with interns in both the initial (2012–2013) and later (2014–2015) cohorts (N = 1,248). Program-level changes in depressive symptoms (r = 0.30; P = .037) and ratings of workload (r = 0.61; P < .001) and learning environment (r = 0.34; P = .032) were significantly associated across cohorts. This suggests that the effects of programs on resident mental health, workload, and learning environment were relatively stable over time, supporting the subsequent investigation of program-level variables associated with resident depressive symptoms.
Predictors of program-level change in depressive symptoms
Table 2 presents the associations of program-level variables with change in depressive symptoms for the full study set (2012–2015).
When the 14 variables that were significantly associated with change in depressive symptoms were entered into a stepwise linear regression, timely and appropriate faculty feedback (P = .005), mean duty hours per week (P = .002), learning experience in inpatient rotations (P = .04), and research ranking position (P = .03) remained significant and explained 45.7% of the variance in program-level change in depressive symptoms (Table 3).
Similarly, multivariable analysis adjusted for individual-level variables previously associated with depression during internship (i.e., female gender, self-reported history of depression, childhood stress, and neuroticism)1 confirmed faculty feedback (P = .03), mean duty hours per week (P = .002), learning experiences during inpatient rotations (P = .009), and research ranking position (P = .04) as significant program-level predictors of change in depressive symptoms (Table 3).
Predictors of change in depressive symptoms across cohorts
To accurately estimate the effect size of program-level predictors of change in depressive symptoms, we used the 2012–2013 cohorts as a training set to identify significant predictors of mean change in depressive symptoms within residency programs, and the 2014–2015 cohorts as a test set to estimate the effect size in an independent dataset.
The regression model for the 2012–2013 training set identified three significant variables (faculty feedback, β = −0.34, P = .011; rotation value, β = −0.42, P = .002; and mean work hours, β = 0.38, P = .001). When this model was applied to the prediction of change in depressive symptoms in the 2014–2015 test set, 20.2% of the variance in program-level change in depressive symptoms was explained (Table 4).
This study systematically assessed a large set of internal medicine residency programs to identify program-level factors that are associated with resident depression. We found that rates of depression between internal medicine residency programs vary widely. Importantly, we also found that the rate of depression among interns within programs is relatively consistent across independent cohorts of interns, providing additional evidence that programs play an important role in the development of resident depression.
A lack of timely and appropriate faculty feedback, a negative learning experience during inpatient rotations, increased work hours, and higher institutional research rankings were associated with a greater increase in depressive symptoms during the internship year. Together, these findings suggest that the residency program environment plays a central role in the mental health of medical interns. These program-level factors can inform changes to residency programs that may reduce the risk of depression in resident physicians.
The finding that program-level duty hours were associated with resident depressive symptoms complements prior studies linking individual-level duty hours and resident depression.1,3,12 Residents’ ratings of whether they received timely and appropriate feedback from faculty were the strongest predictor of program-level changes in depressive symptoms. Previous studies have shown associations between appropriate faculty feedback and better performance,30 education,31,32 and lower levels of burnout33 in medical residents. Our findings suggest that timely and appropriate faculty feedback may also help to reduce resident depression. Interventions to promote better faculty feedback are challenging: despite the recognized relevance of feedback in medical education,30,31,34 previous studies have identified multiple barriers to effective faculty feedback,35–37 including the fact that trainees and faculty may perceive the timing, content, and appropriateness of feedback differently.38,39 Further studies should investigate the specific characteristics of faculty feedback associated with better mental health in resident physicians, so that residency programs can invest in systematic interventions to promote effective feedback.
Poor learning experience during inpatient rotations was also predictive of a greater increase in depressive symptoms from baseline to quarterly assessments. Previous studies have discussed the impact of different models of rotations on resident competency and patient safety.40,41 Given our findings, further examinations could also focus on identifying specific aspects of inpatient rotations associated with better resident mental health and satisfaction with learning experience. Additional studies exploring how to improve teaching quality during inpatient rotations and its impact on resident depression are also needed.
Higher institutional research ranking position was associated with a greater increase in resident depressive symptoms during internship. There were no associations between baseline characteristics of participants and Doximity research ranking, suggesting that this association was not driven by individuals predisposed to depression selecting high-ranking research programs. It is possible that research-intensive institutions impose pressure to meet higher levels of productivity in research as well as clinical domains, which can increase the risk for depression. Alternatively, Doximity research rankings may be a proxy for other characteristics of residency programs. For instance, research-intensive institutions may have a culture42,43 that values research productivity at the expense of clinical excellence. Alternatively, the nature and complexity of patients at research-intensive institutions may differ from those at less research-intensive institutions. Because this is the first study to explore associations between research output ranking and program-level change in depressive symptoms, further studies are needed to elucidate these relationships.
The present study has several limitations. First, the wide range in the number of interns included per program (5–101) may have introduced selection bias into the study sample. Although there was no significant association between the prevalence of depression at baseline and during the internship year, it is possible that programs with lower response rates had residents with different levels of depressive symptoms than those included in our analysis.
Second, all our assessments were conducted during the internship year. Therefore, our findings may not be generalizable to later years of residency training. In addition, because we included only internal medicine programs, our findings may not generalize to other specialties.
Third, considering that most of the programs included in this study were large university-based institutions, generalization of our findings to smaller community-based programs should be made with caution.
Fourth, the self-reported nature of the depression, duty hour, workload, and learning environment assessments is another limitation of our study. Although the validity and reliability of the PHQ-9 are strong,24,25 it is important to highlight that its results do not constitute a definitive diagnosis. With regard to program-level duty hours, although previous studies have shown that self-reported work hours match well with electronic recordings,44 bias in the number of reported hours worked could be present in our data. In addition, even though Doximity currently enrolls more than 70% of physicians in the United States,28 the use of its research output data without further evidence of their validity and reliability requires caution in interpreting our findings related to programs’ research rankings.
Fifth, definitive conclusions about causal relationships cannot be drawn from observational studies. For instance, it is possible that the association between program-level depression and ratings of the program learning environment could be driven by depressed residents reporting lower satisfaction with the characteristics of their program. However, several analyses from our study strongly suggest that at least part of the identified associations between specific program factors and resident depression is due to residency factors. First, there is large variation in the magnitude of the increase in depressive symptoms between programs. Further, the level of depressive symptoms is relatively stable for a given program across independent cohorts of residents, indicating that the large variation in depressive symptoms between programs can be attributed to program features rather than to the individuals within the program in a given cohort. Finally, individual variables such as history of depression, depressive symptoms, and neuroticism measured at baseline, before subjects were exposed to program environments, did not predict the ratings of residency factors associated with depression (faculty feedback, rotation value, and duty hours). These analyses all suggest an important role of residency factors in contributing to residents’ depression.
In summary, this large prospective longitudinal study found that the level of depressive symptoms varies widely among internal medicine residency programs and that a considerable portion of this variance can be explained by program-level variables: timely and appropriate feedback from faculty, learning experience during inpatient rotations, duty hours, and program research ranking position. These factors are potentially valuable targets for intervention to improve the wellness and mental health of residents. Future studies could consider a qualitative approach to identify additional variables that distinguish programs with high and low rates of resident depression. In addition, further studies could assess whether specific interventions and changes, targeting the factors identified by this study, reduce the high rates of depression among resident physicians.
Acknowledgments: The authors acknowledge and thank the interns who took part in this study.
1. Sen S, Kranzler HR, Krystal JH, et al. A prospective cohort study investigating factors associated with depression during medical internship. Arch Gen Psychiatry. 2010;67:557–565.
2. Mata DA, Ramos MA, Bansal N, et al. Prevalence of depression and depressive symptoms among resident physicians: A systematic review and meta-analysis. JAMA. 2015;314:2373–2383.
3. de Oliveira GS Jr, Chang R, Fitzgerald PC, et al. The prevalence of burnout and depression and their association with adherence to safety and practice standards: A survey of United States anesthesiology trainees. Anesth Analg. 2013;117:182–193.
4. Tyssen R, Vaglum P, Grønvold NT, Ekeberg O. Suicidal ideation among medical students and young physicians: A nationwide and prospective study of prevalence and predictors. J Affect Disord. 2001;64:69–79.
5. Center C, Davis M, Detre T, et al. Confronting depression and suicide in physicians: A consensus statement. JAMA. 2003;289:3161–3166.
6. West CP, Tan AD, Shanafelt TD. Association of resident fatigue and distress with occupational blood and body fluid exposures and motor vehicle incidents. Mayo Clin Proc. 2012;87:1138–1144.
7. West CP, Tan AD, Habermann TM, Sloan JA, Shanafelt TD. Association of resident fatigue and distress with perceived medical errors. JAMA. 2009;302:1294–1300.
8. Fahrenkopf AM, Sectish TC, Barger LK, et al. Rates of medication errors among depressed and burnt out residents: Prospective cohort study. BMJ. 2008;336:488–491.
9. Bellini LM, Baime M, Shea JA. Variation of mood and empathy during internship. JAMA. 2002;287:3143–3146.
10. Guille C, Clark S, Amstadter AB, Sen S. Trajectories of depressive symptoms in response to prolonged stress in medical interns. Acta Psychiatr Scand. 2014;129:109–115.
11. Clark DC, Salazar-Grueso E, Grabler P, Fawcett J. Predictors of depression during the first 6 months of internship. Am J Psychiatry. 1984;141:1095–1098.
12. Fried EI, Nesse RM, Guille C, Sen S. The differential influence of life stress on individual symptoms of depression. Acta Psychiatr Scand. 2015;131:465–471.
13. Grant F, Guille C, Sen S. Well-being and the risk of depression under stress. PLoS One. 2013;8:e67395.
14. Nasca TJ, Weiss KB, Bagian JP. Improving clinical learning environments for tomorrow’s physicians. N Engl J Med. 2014;370:991–993.
15. Weiss KB, Bagian JP, Nasca TJ. The clinical learning environment: The foundation of graduate medical education. JAMA. 2013;309:1687–1688.
16. Jennings ML, Slavin SJ. Resident wellness matters: Optimizing resident education and wellness through the learning environment. Acad Med. 2015;90:1246–1250.
17. Buddeberg-Fischer B, Klaghofer R, Stamm M, Siegrist J, Buddeberg C. Work stress and reduced health in young physicians: Prospective evidence from Swiss residents. Int Arch Occup Environ Health. 2008;82:31–38.
18. Sakata Y, Wada K, Tsutsumi A, et al. Effort–reward imbalance and depression in Japanese medical residents. J Occup Health. 2008;50:498–504.
19. Li J, Weigl M, Glaser J, Petru R, Siegrist J, Angerer P. Changes in psychosocial work environment and depressive symptoms: A prospective study in junior physicians. Am J Ind Med. 2013;56:1414–1422.
20. Weigl M, Hornung S, Petru R, Glaser J, Angerer P. Depressive symptoms in junior doctors: A follow-up study on work-related determinants. Int Arch Occup Environ Health. 2012;85:559–570.
21. FREIDA online. American Medical Association fellowship and residency electronic interactive database. https://freida.ama-assn.org/Freida/#. Accessed November 10, 2018.
22. Costa PT Jr, McCrae RR. Stability and change in personality assessment: The revised NEO Personality Inventory in the year 2000. J Pers Assess. 1997;68:86–94.
23. Taylor SE, Way BM, Welch WT, Hilmert CJ, Lehman BJ, Eisenberger NI. Early family environment, current adversity, the serotonin transporter promoter polymorphism, and depressive symptomatology. Biol Psychiatry. 2006;60:671–676.
24. Spitzer RL, Kroenke K, Williams JB. Validation and utility of a self-report version of PRIME-MD: The PHQ primary care study. Primary Care Evaluation of Mental Disorders. Patient Health Questionnaire. JAMA. 1999;282:1737–1744.
25. Kroenke K, Spitzer RL, Williams JB. The PHQ-9: Validity of a brief depression severity measure. J Gen Intern Med. 2001;16:606–613.
26. Seelig CB, DuPre CT, Adelman HM. Development and validation of a scaled questionnaire for evaluation of residency programs. South Med J. 1995;88:745–750.
27. Doximity. Residency navigator methodology. https://residency.doximity.com/methodology?_remember_me_attempted=yes. Accessed November 10, 2018.
28. Doximity reaches over 70% of U.S. physicians. Doximity blog. https://blog.doximity.com/articles/we-re-proud-to-serve-70-of-the-nation-s-physicians. Published February 22, 2017. Accessed November 10, 2018.
29. FREIDA online. American Medical Association fellowship and residency electronic interactive database. Online specialty training search: Internal medicine. https://freida.ama-assn.org/Freida/user/specStatisticsSearch.do?method=viewDetail&pageNumber=2&spcCd=140. Published 2016. Accessed November 10, 2018.
30. Veloski J, Boex JR, Grasberger MJ, Evans A, Wolfson DB. Systematic review of the literature on assessment, feedback and physicians’ clinical performance: BEME guide no. 7. Med Teach. 2006;28:117–128.
31. Ende J. Feedback in clinical medical education. JAMA. 1983;250:777–781.
32. Minehart RD, Rudolph J, Pian-Smith MC, Raemer DB. Improving faculty feedback to resident trainees during a simulated case: A randomized, controlled trial of an educational intervention. Anesthesiology. 2014;120:160–171.
33. Ripp J, Babyatsky M, Fallar R, et al. The incidence and predictors of job burnout in first-year internal medicine residents: A five-institution study. Acad Med. 2011;86:1304–1310.
34. Simon SR, Sousa PJ, MacBride SE. The importance of feedback training. Acad Med. 1997;72:12.
35. Mitchell JD, Holak EJ, Tran HN, Muret-Wagstaff S, Jones SB, Brzezinski M. Are we closing the gap in faculty development needs for feedback training? J Clin Anesth. 2013;25:560–564.
36. Bing-You RG, Trowbridge RL. Why medical educators may be failing at feedback. JAMA. 2009;302:1330–1331.
37. Mitchell JD, Jones SB. Faculty development in feedback provision. Int Anesthesiol Clin. 2016;54:54–65.
38. Sender Liberman A, Liberman M, Steinert Y, McLeod P, Meterissian S. Surgery residents and attending surgeons have different perceptions of feedback. Med Teach. 2005;27:470–472.
39. Gil DH, Heins M, Jones PB. Perceptions of medical school faculty members and students on clinical clerkship feedback. J Med Educ. 1984;59(11 pt 1):856–864.
40. Holmboe E, Ginsburg S, Bernabeo E. The rotational approach to medical education: Time to confront our assumptions? Med Educ. 2011;45:69–80.
41. Napolitano LM, Biester TW, Jurkovich GJ, Buyske J, Malangoni MA, Lewis FR Jr; Members of the Trauma, Burns and Critical Care Board of the American Board of Surgery. General surgery resident rotations in surgical critical care, trauma, and burns: What is optimal for residency training? Am J Surg. 2016;212:629–637.
42. Shanafelt TD. Enhancing meaning in work: A prescription for preventing physician burnout and promoting patient-centered care. JAMA. 2009;302:1338–1340.
43. Balch CM, Freischlag JA, Shanafelt TD. Stress and burnout among surgeons: Understanding and managing the syndrome and avoiding the adverse consequences. Arch Surg. 2009;144:371–376.
44. Todd SR, Fahy BN, Paukert JL, Mersinger D, Johnson ML, Bass BL. How accurate are self-reported resident duty hours? J Surg Educ. 2010;67:103–107.