
Multi-institutional collaborative and QI network research

Increasing Recognition and Diagnosis of Adolescent Depression: Project RedDE: A Cluster Randomized Trial

Rinke, Michael L. MD, PhD*; Bundy, David G. MD, MPH; Stein, Ruth E.K. MD*; O’Donnell, Heather C. MD, MS; Heo, Moonseong PhD§; Sangvai, Shilpa MD, MPH; Lilienfeld, Harris MD; Singh, Hardeep MD, MPH**

doi: 10.1097/pq9.0000000000000217


INTRODUCTION

One in 8 adolescents had at least 1 major depressive episode in the prior year, and up to 20% of adolescents experience a major depressive episode.1–7 Over 28% of adolescents report depressive symptoms every day for 2 or more weeks, and 7.4%–7.8% of adolescents attempt suicide annually.8,9 Depressed adolescents experience damaging effects on academics and relationships and are at increased risk for substance use, social impairment, and depression later in life.3,4 The US Preventive Services Task Force and the American Academy of Pediatrics (AAP) recommend screening adolescents for depression.10–13 Improving rates of timely adolescent depression diagnosis is crucial to prevent life-altering comorbidities.

Depressed adolescents are often seen in the ambulatory setting,14,15 but provider recognition of depression is low,16–18 potentially due to lack of provider knowledge and time during visits.19–21 Also, adolescents may have symptoms of irritability rather than sadness and rarely present to physicians with a mood complaint, making their diagnosis harder.22,23 Although treatment of adolescent depression can reduce symptoms, morbidity, and mortality,24,25 few adolescents receive treatment.22 Recognition of adolescent depression and follow-up are important, and reducing missed diagnoses of adolescent depression is a priority among pediatricians.26 By focusing on improved diagnostic accuracy, providers can reduce morbidity and mortality caused by adolescent depression.

This study’s objective was to determine whether a quality improvement (QI) collaborative (QIC) intervention could increase the frequency of recognition and diagnosis of adolescent depression and sustain rates over 16 months while clinicians concurrently worked to improve other diagnoses. The primary intervention, a QIC, is an organized, multifaceted collaborative approach to QI with (1) a specific topic for improvement with large practice variation; (2) clinical and QI experts sharing best practices; (3) multidisciplinary teams from multiple sites willing to improve; (4) a model for improvement with measurable targets, data feedback, and small tests of change; and (5) a series of structured activities to advance improvement, exchange ideas, and share experiences.10,27–32 The intervention was tested via a prospective, stepped-wedge cluster randomized trial in a national group of pediatric primary care clinics.

METHODS

Study Design

As described previously,33,34 Project RedDE (Reducing Diagnostic Errors in Pediatric Primary Care) aimed to improve diagnostic performance in primary care pediatrics, in collaboration with the AAP’s Quality Improvement Innovation Networks (QuIIN). QuIIN aims to “improve the quality and value of care and outcomes for children and families” via QI networks. Increasing diagnoses of adolescent depression was 1 of 3 QIC topics. A group of experts developed interventions and measures and directed the QIC.

Randomization

Thirty-four “wave 1” pediatric practices were recruited in March 2015 and randomized, via a computer random number generator and in a nonblinded fashion, to 1 of 3 clusters. We performed multivariate matching before randomization35 based on university affiliation, self-reported prior work to improve performance on these diagnoses, and total annual visits per practitioner equivalent. Nine practices dropped out after randomization but before submitting data due to inability to collect necessary baseline data. Twenty-four of the remaining 25 practices submitted complete data for the project; 1 practice dropped out after 8 months due to loss of their lead physician. This practice’s data were included because they submitted control and intervention data, although not long-term follow-up data. To increase sample size, we recruited and randomized 9 additional practices in December 2015 (wave 2). Of these, 2 practices dropped out due to data collection issues before the intervention, and 2 other practices from a single care network merged into 1 “team” to boost their sample size (Fig. 1). The 6 wave 2 practices participated alongside the wave 1 teams. In total, 43 practices were randomized, and 31 were included in the final analysis.

Fig. 1. Modified CONSORT flow diagram for the cluster randomized stepped-wedge trial. *One practice in group 1 withdrew after the first phase of the project. Their data were included in the primary analysis.

Each of the 3 clusters intervened on the same diagnoses, but in a different order (Fig. 2). Each cluster was assigned to collect retrospective baseline data (February 2015–June 2015) on 1 of 3 diagnoses: missed diagnosis of elevated blood pressure, delayed diagnosis of abnormal laboratory values, or increasing the recognition and diagnosis of adolescent depression. Clusters collected 1 month of prospective baseline data (September 2015) to ensure comparability between retrospective and prospective data collection and, in October 2015, began working to improve the first assigned diagnosis, continuing through May 2016. Concurrently, each cluster collected control data on a second diagnosis. In a prospective, stepped-wedge fashion, after 8 months (June 2016), each cluster then began to work to improve a second diagnosis, sustain the improvement on their first diagnosis, and act as a control group for the third diagnosis. In February 2017, each cluster began to work to improve the third diagnosis, sustain improvement on their second diagnosis, and maintain improvement on their first diagnosis with reduced feedback and attention on that diagnosis from the QIC (Fig. 2). As wave 2 practices entered the collaborative after the first action period, these practices intervened on only 2 of the 3 diagnoses.

Fig. 2. Project RedDE timeline for adolescent depression. aPractices were involved in Project RedDE during this time but working exclusively on the 2 nondepression errors. Practices in groups 2 and 3 had already worked to reduce 1 or 2 other DEs before beginning to work on depression errors. bDuring the sustain and maintenance phases, practices began working to reduce a second and third DE, respectively. cWave 2 practices integrated alongside wave 1 practices, intervening first on wave 1’s second DE. These practices never intervened on a third DE. DEs indicates diagnostic errors.

Each cluster had a “control phase,” when they collected data on adolescent depression frequency before attempting to increase its identification, and all but 1 wave 2 group had an “intervention phase,” when they actively worked to increase adolescent depression diagnosis frequency. Two clusters had a “sustain phase,” when they actively worked to improve performance on an additional diagnosis and sustain improvement on adolescent depression frequency; 1 cluster had a “maintenance phase,” when they actively worked to improve performance on 2 other diagnoses and maintain improvement on adolescent depression frequency.

Intervention

Each practice identified a 3-person QI team consisting of at least a physician, a nurse, and another professional. Teams participated in a 2-day interactive video learning session during which they learned and practiced QI methodology and diagnosis-specific content related to the first targeted diagnosis. They then received rapid, transparent data feedback with benchmarking, participated in monthly, hour-long video conferences, and completed monthly mini root cause analyses. Mini root cause analyses examined 15 standardized patient and system factors that could have led to a patient with depression not being recognized or diagnosed.36,37 Additional day-long video learning sessions were conducted every 8 months as practices transitioned to working on a new diagnosis (Fig. 2). While clusters were working on their second diagnosis, monthly video conferences provided transparent data feedback on both their first and second diagnoses; when working on their third diagnosis, monthly video conferences presented data from their second and third diagnoses, and data from their first diagnosis were presented only quarterly. We provided each practice with a QI coach, and each cluster had an interactive email listserv and a cluster-specific website with resources.

During learning sessions, clusters about to intervene on adolescent depression were taught about this condition, its importance, and the utility of screening; how to use the Patient Health Questionnaire 9-Modified (PHQ-9M);12 how to assess and diagnose adolescents who screened positive; and how to begin treatment or refer adolescents with a depression diagnosis to mental health practitioners. The PHQ-9M is a screening tool to identify patients who need a depression diagnostic evaluation. The PHQ-9M consists of 9 questions about depression symptom frequency in the prior 2 weeks, 1 question about symptoms interfering with daily tasks, 1 question about depressed feelings in the past year, and 1 question each about suicidal thoughts and attempts. The Resource for Advancing Children’s Health (REACH) Institute’s Patient-centered Mental Health in Pediatric Primary Care Program’s training materials were used with permission.38 Teams were taught how to use the Columbia Depression Scale and the Global Assessment Scale. These tools, including the PHQ-9M and the Guidelines for Adolescent Depression in Primary Care (GLAD-PC), were chosen because they are AAP endorsed.12
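For practices building the screen into an electronic workflow, the scoring logic described above can be expressed in a few lines of code. The Python sketch below is illustrative only: this article does not specify scoring rules, so the ≥10 total-score cutoff, the field names, and the safety-flag behavior are assumptions that should be verified against the AAP-endorsed PHQ-9M and GLAD-PC materials before any clinical use.

```python
# Minimal sketch of PHQ-9M-style scoring logic (illustrative only; item layout
# follows the description in the text, but the >=10 cutoff and flag rules are
# assumptions -- confirm against the AAP-endorsed PHQ-9M and GLAD-PC materials).
from dataclasses import dataclass
from typing import List

@dataclass
class PHQ9MResponse:
    symptom_items: List[int]      # 9 items, each scored 0-3 (frequency over prior 2 weeks)
    functional_impact: int        # 0-3, symptoms interfering with daily tasks
    depressed_past_year: bool     # depressed feelings in the past year
    suicidal_thoughts: bool
    suicide_attempt: bool

def screen(resp: PHQ9MResponse, cutoff: int = 10) -> dict:
    total = sum(resp.symptom_items)   # total symptom score, 0-27
    return {
        "total_score": total,
        "positive_screen": total >= cutoff,
        # Safety items are flagged regardless of the total score.
        "safety_flag": resp.suicidal_thoughts or resp.suicide_attempt,
    }
```

A positive screen in this sketch would prompt the diagnostic evaluation and referral steps taught in the learning sessions, not a diagnosis by itself.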

Teams received adolescent depression-related tools via a “change package,” which was modeled on the REACH Institute’s materials and the GLAD-PC.12,38 Each step in adolescent depression diagnosis and care had accompanying tools: screening, recognizing abnormal screens, diagnosing depression, discussing with families, screening for suicidality, referring to mental health and/or providing treatment, and communicating with mental health providers. Finally, information about coding and billing was shared. Resources were maintained on the Project RedDE website. All QIC resources were made available to the public following the project’s conclusion.39

Outcome Measures

Pragmatic outcome measures were employed that took into account (1) a prevalent and harmful underlying condition with preventable morbidities; (2) interest from pediatricians;26 (3) the key steps involved in diagnosis; and (4) feasibility of data collection by busy clinicians. Adolescent depression clearly fits this description, as patients diagnosed with and treated for depression have reduced risk for suicidality, poor self-esteem, behavior problems, and functional impairment.40,41 In addition, pediatricians reported interest in improving depression diagnostic performance,26 and diagnoses are identifiable on chart review.12 Pre–data collection webinars, slides, and written definitions emphasized measurement concepts; email listservs and the research team provided clarifications. A refresher data collection webinar was held halfway through the project, but direct data validation was beyond the scope of this study.

Although depressive symptoms (eg, poor school performance, interrupted sleep patterns, and increased disruptive behaviors) without appropriate provider identification or referral were initially considered as the primary outcome, pilot data suggested that charts documenting signs and symptoms of depression in a patient who is neither referred to nor already receiving mental health treatment are rare, and that this methodology likely underidentifies adolescent depression.17 Thus, we used a proxy primary outcome measure for poor adolescent depression diagnostic performance: the frequency of adolescent depression recognition and diagnosis, which increases as missed diagnoses decrease. Given prior literature,16–18 it is reasonable to assume an underdiagnosis of adolescent depression. Practices identified the percent of adolescents who carried diagnoses of depression, dysthymia, or subsyndromal depression in visit notes, problem lists, or billing records (International Classification of Diseases-9/10 codes 296.2, 296.3, 311, 311.0, 300.4, 309.0, 309.1, 309.4; F32.0-5, F32.9, F33.0-4, F33.8-9, F34.1, F43.21, F43.25, F06.3X). These were not patients who only screened positive on a PHQ-9M screen, but those who were ultimately given, at that visit or within 30 days of the visit, a diagnosis of depression. Subsyndromal depression was defined as “a depressive state having two or more symptoms of depression of the same quality as in major depression (MD), excluding depressed mood and anhedonia.”42 We included patients if they were 11–23 years old and attending a health supervision visit. Eleven years old was chosen because the AAP recommended screening at this age.13 Charts were checked 30 days after the visit.
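As a rough illustration of how this outcome definition could be operationalized against an EHR extract, the Python sketch below flags eligible health supervision visits (ages 11–23 years) that carry a qualifying depression, dysthymia, or subsyndromal depression code at the visit or within the following 30 days. The table layout and column names are hypothetical; participating practices abstracted these data from their own records, not with this code.

```python
# Hypothetical data layout: this is not the study's abstraction tool, only a sketch
# of the primary outcome definition (diagnosis at the visit or within 30 days).
import pandas as pd

DEPRESSION_CODE_PREFIXES = {
    # ICD-9
    "296.2", "296.3", "311", "300.4", "309.0", "309.1", "309.4",
    # ICD-10 (F06.3 covers the F06.3X family via prefix matching below)
    "F32.0", "F32.1", "F32.2", "F32.3", "F32.4", "F32.5", "F32.9",
    "F33.0", "F33.1", "F33.2", "F33.3", "F33.4", "F33.8", "F33.9",
    "F34.1", "F43.21", "F43.25", "F06.3",
}

def has_depression_code(code: str) -> bool:
    return any(code.startswith(prefix) for prefix in DEPRESSION_CODE_PREFIXES)

def primary_outcome_rate(visits: pd.DataFrame, diagnoses: pd.DataFrame) -> float:
    """visits: one row per health supervision visit [visit_id, patient_id, visit_date, age].
    diagnoses: one row per coded diagnosis [patient_id, dx_date, icd_code].
    Returns diagnosed visits per 100 eligible visits."""
    eligible = visits[(visits.age >= 11) & (visits.age <= 23)]
    dx = diagnoses[diagnoses.icd_code.apply(has_depression_code)]
    merged = eligible.merge(dx, on="patient_id", how="left")
    in_window = (merged.dx_date >= merged.visit_date) & (
        merged.dx_date <= merged.visit_date + pd.Timedelta(days=30)
    )
    diagnosed_visits = merged.loc[in_window, "visit_id"].unique()
    return 100 * len(diagnosed_visits) / len(eligible)
```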

The secondary outcome measure was whether a provider documented depression concerns or the exclusion of concerns at every adolescent health supervision visit, either with formal screening tools or clinical judgment. This measure examines whether clinicians take advantage of health-care maintenance visits to recognize whether a patient does or does not have depression, as suggested by the US Preventive Services Task Force and the AAP.11–13 In addition, practices tracked a process measure: the percent of eligible patients who received the PHQ-9M screen. Teams were required to report this process measure during the intervention phase until at least 90% of eligible patients received the screen for 2 consecutive months. The primary and secondary outcome data were collected for the length of the project.
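The process-measure reporting rule lends itself to a simple check. The sketch below, with assumed inputs (a chronological list of monthly screening proportions), returns the first month at which a practice has met the ≥90% threshold for 2 consecutive months; it mirrors the rule described above but is not code used in the study.

```python
# Illustrative check of the stopping rule: report the process measure until >=90%
# of eligible patients were screened for 2 consecutive months (inputs assumed).
from typing import List, Optional

def month_threshold_met(monthly_rates: List[float],
                        threshold: float = 0.90,
                        consecutive: int = 2) -> Optional[int]:
    """Return the 1-indexed month in which the rule is first satisfied, else None."""
    streak = 0
    for month, rate in enumerate(monthly_rates, start=1):
        streak = streak + 1 if rate >= threshold else 0
        if streak >= consecutive:
            return month
    return None

# Example: the rule is first met in month 5 (months 4 and 5 are both >=90%).
print(month_threshold_met([0.55, 0.70, 0.88, 0.93, 0.95, 0.97]))  # -> 5
```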

In the initial retrospective phase, practices examined the first 10 patients who met inclusion criteria for the primary and secondary outcomes monthly. During the action periods, practices examined the first 17 patients monthly who met inclusion criteria, based on predetermined power analyses. If practices had fewer eligible patients in a given month, they entered all available data. For each patient, practices recorded deidentified data, including age, sex, and insurance status, in a secure online portal. Insurance status was included as a potential confounder because it is an easily collectible, partial marker of socioeconomic status, which is associated with ambulatory errors and adolescent depression.43,44 Practices were provided a paper data abstraction tool, but anecdotally many practices entered data directly into the portal.

Statistical Analysis

Using patients as the unit of analysis, we compared the primary outcome, the mean number of adolescent patients diagnosed with depression per 100 adolescent patients seen for health supervision visits, across all practices’ intervention and control phases. The outcomes are presented as model-based estimated risk differences (RDs) comparing intervention versus control phases. Generalized mixed-effects regression models with the identity link, adjusted for age, sex, insurance status, and wave, were applied, with month-specific and practice-specific intercepts considered random. We only included patients with complete demographic data. Our power analysis was revised based on group 1 baseline data.33 The minimally detectable RD effect size, with >80% power at a 2-sided significance level of 0.05 and with correlations of outcomes across months within practices of 0.05 and across charts within periods of 0.5, was 5.1%.
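The analysis itself was performed in SAS. As a language-agnostic illustration of the modeling approach, the Python sketch below fits a linear probability mixed model (identity link on a binary outcome) whose fixed-effect coefficient for phase approximates the adjusted RD. Column names are assumptions, and the month random effects are modeled here as a variance component within practice, which only approximates the crossed month- and practice-specific intercepts described above; it is not the authors’ code.

```python
# Minimal sketch, not the authors' SAS model: linear probability mixed model
# estimating the adjusted risk difference for intervention vs. control phase.
import statsmodels.formula.api as smf

def fit_risk_difference(df):
    """df columns (assumed): depressed (0/1), phase ('control'/'intervention'),
    age, sex, insurance, wave, practice_id, month."""
    model = smf.mixedlm(
        "depressed ~ C(phase, Treatment('control')) + age + C(sex) + C(insurance) + C(wave)",
        data=df,
        groups=df["practice_id"],                # practice-specific random intercepts
        vc_formula={"month": "0 + C(month)"},    # month effects as a variance component
    )
    result = model.fit()
    # The coefficient on the intervention level of phase is the adjusted RD.
    return result
```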

Similar models examined (1) the secondary outcome and (2) any differences between the intervention and sustain phases and between the sustain and maintenance phases (Fig. 2). The latter analyses investigated whether practices could sustain improvements while working to improve a second diagnosis and/or maintain improvements while working to improve 2 additional diagnoses with attention diverted from adolescent depression. Process measure data were compared between collaborative groups using Kaplan–Meier analysis and log-rank tests. We additionally examined aggregated primary and secondary outcomes using statistical process control p charts, with Nelson rules45 signifying changes. The intervention’s initiation was adjusted so each group began the intervention on “month 1.” Small multiple p charts identified trends across groups and variation between clinics.
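For readers unfamiliar with p charts, the control limits follow standard Shewhart arithmetic: a center line at the pooled proportion and limits 3 standard errors away, recomputed for each month’s denominator. The Python sketch below (with assumed inputs, not the study’s plotting code) computes those limits and checks one commonly cited Nelson rule as an example of a special-cause signal.

```python
# Illustrative p-chart arithmetic and one Nelson rule check (assumed inputs:
# monthly counts of diagnosed patients and charts reviewed per practice or group).
import numpy as np

def p_chart_limits(diagnosed: np.ndarray, reviewed: np.ndarray):
    """Return per-month proportions, center line, and 3-sigma control limits."""
    p = diagnosed / reviewed
    p_bar = diagnosed.sum() / reviewed.sum()          # center line (pooled proportion)
    sigma = np.sqrt(p_bar * (1 - p_bar) / reviewed)   # varies with the monthly denominator
    ucl = np.clip(p_bar + 3 * sigma, 0, 1)
    lcl = np.clip(p_bar - 3 * sigma, 0, 1)
    return p, p_bar, ucl, lcl

def nine_in_a_row(p: np.ndarray, p_bar: float) -> bool:
    """Nelson rule 2: nine consecutive points on the same side of the center line."""
    run, prev = 0, 0
    for s in np.sign(p - p_bar):
        run = run + 1 if (s == prev and s != 0) else (1 if s != 0 else 0)
        prev = s
        if run >= 9:
            return True
    return False
```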

Intervention versus control patient demographics were compared with chi-square tests. All data analyses were performed using SAS v9.4. The institutional review boards of the AAP and the Albert Einstein College of Medicine approved this study. The clinical trial registration number for Project RedDE is NCT02798354 (clinicaltrials.gov).

RESULTS

Thirty-one practices were included in the primary outcome analysis (Table 1), all of which used an electronic health record. Data on 3,394 patient visits were entered for the control phase and 4,114 for the intervention phase. We excluded 295 patients from the final model due to missing insurance data. Patients in the intervention phase were younger and more often nonprivately insured (Table 2).

Table 1. Demographics of Included Practices at Baseline: N (%)
Table 2. Demographics of Included Adolescent Patients in Primary Analysis

The adjusted percentage of patients with depression, dysthymia, or subsyndromal depression was 6.6% in the control phase, compared with 10.5% in the intervention phase (RD 3.9%; 95% CI 2.4%, 5.3%; P < 0.0001). Practices sustained and maintained these improvements (Table 3): the mean percentage of patients with depression did not differ between the intervention and sustain phases (RD −0.4%; 95% CI −2.3%, 1.4%; P = 0.642) or between the sustain and maintenance phases (RD −0.1%; 95% CI −2.7%, 2.4%; P = 0.911).

Table 3. Primary and Secondary Outcome Results

The secondary outcome, identifying when a provider pursued an evaluation for adolescent depression, also improved from the control to the intervention phase (RD 26.6%; 95% CI 22.4%, 30.7%; P < 0.0001). Improvement continued from the intervention to the sustain phase (RD 18.6%; 95% CI 14%, 23.2%; P < 0.0001) and from the sustain to the maintenance phase (RD 7.4%; 95% CI 2.3%, 12.6%; P = 0.005).

Figure 3 demonstrates that a significant shift began with the first intervention month for the primary outcome and with the second intervention month for the secondary outcome. Variation is observed in the small multiple p charts (See figure, Supplemental Digital Content, available at http://links.lww.com/PQ9/A137) between and within groups, with some clinics still not at 10% depression incidence by the conclusion of the intervention and some at 10% in the baseline period. There is also continued variation within clinics: because the expected number of patients with depression for any given clinic was 1.7 per month, some clinics have months with zero diagnoses and months with 4 to 5 diagnoses.

Fig. 3. p charts of primary and secondary outcomes. UCL, upper control limit; LCL, lower control limit.

There were no differences in the Kaplan–Meier analysis comparing the time to 2 months with 90% or more patients screened with the PHQ-9M across the 3 clusters (log-rank test P = 0.534). Fifty percent of practices reached this threshold at 5 months, and 70% had reached it by the end of the intervention phase.

DISCUSSION

In a cluster randomized, stepped-wedge trial involving a national cohort of pediatric practices, a QIC successfully increased the percent of adolescents who carried diagnoses of depression from 6.6% to 10.5% and sustained this improvement over 16 months. This change was also evident in statistical process control analyses. Practices screened more systematically with an appropriate tool and increased the frequency with which providers pursued a diagnosis of depression at health supervision visits, a key step in recognizing a depressed adolescent. This type of diagnostic performance improvement strategy can potentially be applied to other mental health conditions.

Missed opportunities to diagnose depression occur in approximately 60% of adolescents,33 and such high-frequency errors are a priority for pediatricians.26 Our data support prior studies suggesting that underdiagnosis of adolescent depression is common in pediatrics16–18 and illustrate a methodology to reduce these missed diagnoses through collaboration, data benchmarking, QI coaching, and a focus on failures. Systematizing office practices to ensure screening with the PHQ-9M may improve diagnosis rates, as this process measure increased alongside depression diagnoses. Although measuring time to treatment and symptom relief was beyond the scope of this project, many practices anecdotally reported increased confidence in managing mildly to moderately depressed adolescents, increased communication and collaboration with mental health practitioners, and improved outcomes for patients they otherwise would not have suspected of having depression. Because improvements persisted even when the QIC’s attention was not focused on depression diagnosis, we hypothesize that the change seen can be replicated in practices without an extensive QIC infrastructure. Further work is needed to understand why some clinics improved immediately and some clinics did not see appreciable improvement across the intervention phase (See figure, Supplemental Digital Content, available at http://links.lww.com/PQ9/A137).

Practices anecdotally identified separating adolescents from their adult caregivers as a challenge in depression screening. Although it is considered best practice, participants reported that separating patients from their caregivers was less commonly done and less commonly accepted for younger adolescents. One practice stated, “We never would have screened or even considered depression in a 12 year-old, and we found two 12-year-old patients last week with depression, one of whom was suicidal.” Solutions to privacy concerns included administering the PHQ-9M while caregivers were completing insurance forms, or while patients privately had vitals, hearing tests, vision tests, or anthropometric measurements taken. Many practices reduced inefficiencies and risk of error by developing previsit screening protocols to identify patients who would need the PHQ-9M. Others screened all adolescent patients for depression at every visit, thus making the process more standardized. Anecdotally, practices that reported screening at all adolescent visits did not report detrimental impacts on patient flow. In addition, many practices notified caregivers in advance that screening for depression, as well as for alcohol, tobacco, illicit drugs, and sexual activity, would occur during health supervision visits. Future work on improving detection and diagnosis of similarly sensitive conditions should consider the importance of integrating privacy interventions.

Limitations of this study include the concern that practices enrolled in a QIC are unlikely to be representative of all pediatric practices. Further, 11 of the 43 practices randomized withdrew before study implementation due to data collection burden. All of these practices withdrew before attempting to change their clinic processes and behaviors, making it unclear whether easier data collection would have reduced this attrition rate. The study does not have information on the practices that dropped out, either for an intention-to-treat analysis or for comparison of demographics, as these practices did not submit any data. It is possible that practices with more resources or greater capacity to collect the needed data are less likely to resemble other general pediatric practices, although we believe this is unlikely because our cohort was diverse, ranging from single-practitioner private practices to large academic practices with many residents and attendings. In addition, because the research team performed no direct site visits, there was potential variability in the application of data definitions across practices. However, the research team was available to answer questions during all data collection phases, hosted review sessions, and shared tips frequently on the listserv. To facilitate pragmatic data collection, the adolescent depression primary outcome measure assumes an underdiagnosis of adolescent depression, which is probable in light of current literature.16–18 The study is unable to confirm the true incidence of adolescent depression in these practices, only that its diagnosis increased significantly and that the percent of providers addressing adolescent depression at health supervision visits increased significantly. It is possible that even more adolescents during the intervention had diagnosable depression; this may be true as only 70% of practices were consistently screening 90% of patients with the PHQ-9M at the end of the intervention phase. It is worth noting that the frequency of adolescents diagnosed with depression in these clinics at the end of the project was around 10%, which is comparable with the frequency in large US depression studies.7 We are unable to comment on which component of the QIC intervention was most important for the increase in adolescent depression recognition and diagnosis. Finally, practices were asked to evaluate the charts of the first 17 patients who met inclusion criteria each month. Although charts were not randomly selected for review, this systematic sampling reduces the potential for biased chart selection.

CONCLUSIONS

A national group of pediatric practices increased diagnoses of adolescent depression and sustained that improvement over 16 months. Future research should focus on spreading this effort to all pediatric primary care clinics, on the outcomes of patients following these diagnoses, and on whether this model can apply to other adolescent and adult mental health diagnoses in primary care.

ACKNOWLEDGMENTS

The authors would like to gratefully acknowledge the contribution of all pediatric primary care sites who participated in this work and Drs. Brady, Adelman, Kairys, and Lehmann, Ms. Norton, the AAP’s QuIIN, and Drs. Dadlez, Orringer, and Helms. We would also like to thank the REACH PPP program for allowing us to adapt their adolescent depression teaching materials.

REFERENCES

1. Center for Behavioral Health Statistics and Quality. 2015. Behavioral health trends in the United States: results from the 2014 National Survey on Drug Use and Health (HHS Publication No. SMA 15-4927, NSDUH Series H-50). Retrieved from https://www.samhsa.gov/data/sites/default/files/NSDUH-FRR1-2014/NSDUH-FRR1-2014.pdf.
2. Avenevoli S, Swendsen J, He JP, et al. Major depression in the national comorbidity survey-adolescent supplement: prevalence, correlates, and treatment. J Am Acad Child Adolesc Psychiatry. 2015;54(1):37–44.e32.
3. Garrison CZ, Addy CL, Jackson KL, et al. Major depressive disorder and dysthymia in young adolescents. Am J Epidemiol. 1992;135:792–802.
4. Whitaker A, Johnson J, Shaffer D, et al. Uncommon troubles in young people: prevalence estimates of selected psychiatric disorders in a nonreferred adolescent population. Arch Gen Psychiatry. 1990;47:487–496.
5. Lewinsohn PM, Hops H, Roberts RE, et al. Adolescent psychopathology: I. prevalence and incidence of depression and other DSM-III-R disorders in high school students. J Abnorm Psychol. 1993;102:133–144.
6. Lewinsohn PM, Rohde P, Klein DN, et al. Natural course of adolescent major depressive disorder: I. continuity into young adulthood. J Am Acad Child Adolesc Psychiatry. 1999;38:56–63.
7. Mojtabai R, Olfson M, Han B. National trends in the prevalence and treatment of depression in adolescents and young adults. Pediatrics. 2016;138(6):e20161878.
8. Eaton DK, Kann L, Kinchen S, et al.; Centers for Disease Control and Prevention (CDC). Youth risk behavior surveillance - United States, 2011. MMWR Surveill Summ. 2012;61:1–162.
9. Kann L, McManus T, Harris WA, et al. Youth risk behavior surveillance - United States, 2017. MMWR Surveill Summ. 2018;67:1–114.
10. Beers LS, Godoy L, John T, et al. Mental health screening quality improvement learning collaborative in pediatric primary care. Pediatrics. 2017;140(6):e20162966.
11. Siu AL; US Preventive Services Task Force. Screening for depression in children and adolescents: US preventive services task force recommendation statement. Pediatrics. 2016;137:e20154467.
12. Zuckerbrot RA, Cheung AH, Jensen PS, et al.; GLAD-PC Steering Group. Guidelines for adolescent depression in primary care (GLAD-PC): I. identification, assessment, and initial management. Pediatrics. 2007;120:e1299–e1312.
13. Geoffrey RS, Cynthia B, Graham AB 3rd, et al. 2014 recommendations for pediatric preventive health care. Pediatrics. 2014;133(3):568–570.
14. McCarty CA, Russo J, Grossman DC, et al. Adolescents with suicidal ideation: health care use and functioning. Acad Pediatr. 2011;11:422–426.
15. Luoma JB, Martin CE, Pearson JL. Contact with mental health and primary care providers before suicide: a review of the evidence. Am J Psychiatry. 2002;159:909–916.
16. Glazebrook C, Hollis C, Heussler H, et al. Detecting emotional and behavioural problems in paediatric clinics. Child Care Health Dev. 2003;29:141–149.
17. Zuckerbrot RA, Jensen PS. Improving recognition of adolescent depression in primary care. Arch Pediatr Adolesc Med. 2006;160:694–704.
18. Mayne SL, Ross ME, Song L, et al. Variations in mental health diagnosis and prescribing across pediatric primary care practices. Pediatrics. 2016;137(5):e20152974.
19. Chang G, Warner V, Weissman MM. Physicians’ recognition of psychiatric disorders in children and adolescents. Am J Dis Child. 1988;142:736–739.
20. Kramer T, Garralda ME. Psychiatric disorders in adolescents in primary care. Br J Psychiatry. 1998;173:508–513.
21. Olson AL, Kelleher KJ, Kemper KJ, et al. Primary care pediatricians’ roles and perceived responsibilities in the identification and management of depression in children and adolescents. Ambul Pediatr. 2001;1:91–98.
22. Leaf PJ, Alegria M, Cohen P, et al. Mental health service use in the community and schools: results from the four-community MECA Study. Methods for the Epidemiology of Child and Adolescent Mental Disorders Study. J Am Acad Child Adolesc Psychiatry. 1996;35:889–897.
23. Thapar A, Collishaw S, Pine DS, et al. Depression in adolescence. Lancet. 2012;379:1056–1067.
24. March J, Silva S, Petrycki S, et al.; Treatment for Adolescents with Depression Study (TADS) Team. Fluoxetine, cognitive-behavioral therapy, and their combination for adolescents with depression: Treatment for Adolescents with Depression Study (TADS) randomized controlled trial. JAMA. 2004;292:807–820.
25. Neufeld SAS, Dunn VJ, Jones PB, et al. Reduction in adolescent depression after contact with mental health services: a longitudinal cohort study in the UK. Lancet Psychiatry. 2017;4:120–127.
26. Rinke ML, Singh H, Ruberman S, et al. Primary care pediatricians’ interest in diagnostic error reduction. Diagnosis (Berl). 2016;3:65–69.
27. Miller MR, Niedner MF, Huskins WC, et al.; National Association of Children’s Hospitals and Related Institutions Pediatric Intensive Care Unit Central Line-Associated Bloodstream Infection Quality Transformation Teams. Reducing PICU central line-associated bloodstream infections: 3-year results. Pediatrics. 2011;128:e1077–e1083.
28. Pronovost P, Needham D, Berenholtz S, et al. An intervention to decrease catheter-related bloodstream infections in the ICU. N Engl J Med. 2006;355:2725–2732.
29. Nadeem E, Olin SS, Hill LC, et al. Understanding the components of quality improvement collaboratives: a systematic literature review. Milbank Q. 2013;91:354–394.
30. Hinton CF, Neuspiel DR, Gubernick RS, et al. Improving newborn screening follow-up in pediatric practices: quality improvement innovation network. Pediatrics. 2012;130:e669–e675.
31. Hulscher ME, Schouten LM, Grol RP, et al. Determinants of success of quality improvement collaboratives: what does the literature show? BMJ Qual Saf. 2013;22:19–31.
32. Schouten LM, Hulscher ME, van Everdingen JJ, et al. Evidence for the impact of quality improvement collaboratives: systematic review. BMJ. 2008;336:1491–1494.
33. Rinke ML, Singh H, Heo M, et al. Diagnostic errors in primary care pediatrics: project RedDE. Acad Pediatr. 2018;18:220–227.
34. Bundy DG, Singh H, Stein RE, et al. The design and conduct of project RedDE: a cluster-randomized trial to reduce diagnostic errors in pediatric primary care. Clin Trials. 2019;16(2):154–164.
35. Greevy R, Lu B, Silber JH, et al. Optimal multivariate matching before randomization. Biostatistics. 2004;5:263–275.
36. Rinke ML, Chen AR, Bundy DG, et al. Implementation of a central line maintenance care bundle in hospitalized pediatric oncology patients. Pediatrics. 2012;130:e996–e1004.
37. Rinke ML, Bundy DG, Chen AR, et al. Central line maintenance bundles and CLABSIs in ambulatory oncology patients. Pediatrics. 2013;132:e1403–e1412.
38. The REACH Institute. Patient-centered Mental Health in Pediatric Primary Care. 2018. Available at http://www.thereachinstitute.org/services/for-primary-care-practitioners/primary-pediatric-psychopharmacology-1. Accessed June 14, 2018.
39. Rinke ML. Toolkit for Reducing Diagnostic Errors in Primary Care - Project RedDE! 2018. Available at https://www.aap.org/en-us/professional-resources/quality-improvement/Project-RedDE/Pages/Project-RedDE.aspx. Accessed November 15, 2018.
40. Bridge JA, Iyengar S, Salary CB, et al. Clinical response and risk for reported suicidal ideation and suicide attempts in pediatric antidepressant treatment: a meta-analysis of randomized controlled trials. JAMA. 2007;297:1683–1696.
41. Weisz JR, McCarty CA, Valeri SM. Effects of psychotherapy for depression in children and adolescents: a meta-analysis. Psychol Bull. 2006;132:132–149.
42. Sadek N, Bona J. Subsyndromal symptomatic depression: a new concept. Depress Anxiety. 2000;12:30–39.
43. Goodman E, Slap GB, Huang B. The public health impact of socioeconomic status on adolescent depression and obesity. Am J Public Health. 2003;93:1844–1850.
44. Piccardi C, Detollenaere J, Vanden Bussche P, et al. Social disparities in patient safety in primary care: a systematic review. Int J Equity Health. 2018;17:114.
45. Nelson LS. The Shewhart control chart-tests for special causes. J Qual Technol. 1984;16(4):237–239.


Copyright © 2019 the Author(s). Published by Wolters Kluwer Health, Inc.