INTRODUCTION
Cardiovascular disease (CVD) is the leading cause of death worldwide.1,2 Hypertension, defined as a sustained elevation in blood pressure (BP), is one of the strongest CVD risk factors. Studies have demonstrated that childhood elevated BP and hypertension persist into adulthood3–5: children with a single BP measurement >90th percentile are 2.4 times more likely to have an adult BP >90th percentile, and preventing pediatric elevated BP could eliminate 10% of adult elevated BP.3,5 These data have led to a paradigm shift from primary prevention in adults to primary and even primordial prevention in childhood.6,7 Early recognition and treatment of the 3.5% of children with hypertension8 are essential to decrease the substantial adult CVD burden.
Diagnosing hypertension in childhood depends first on recognizing elevated BPs, defined as systolic or diastolic values ≥120/80 mm Hg or ≥90th percentile for a child’s age, height, and sex.9 Because these thresholds vary with age, height, and sex, pediatric providers frequently misclassify elevated BPs as normal.10–12 Over 50% of pediatric patients with elevated BP were misclassified as having normal BP and, therefore, did not receive an appropriate provider action.12 Reducing high-frequency but subacute diagnostic errors, such as missed elevated BP, is of strong interest to pediatricians, yet fewer than a third are involved in efforts to reduce missed elevated BP.13
Our objective was to determine, via a prospective, stepped-wedge, cluster-randomized controlled trial in a national cohort of pediatric primary care clinics, whether a quality improvement collaborative (QIC) intervention could reduce the frequency of missed elevated pediatric BP and sustain those reductions while practices refocused on reducing other errors. Recognizing a patient with an elevated BP is a crucial first step in diagnosing hypertension, and lessons from this project can be broadly applied and serve as a foundation for efforts to improve hypertension care delivery.
METHODS
As described previously,12,14 Project RedDE (Reducing Diagnostic Errors in Pediatric Primary Care) aimed to reduce 3 different diagnostic errors in primary care pediatric practices via a QIC, in collaboration with the American Academy of Pediatrics’ (AAP) Quality Improvement Innovation Networks. Quality Improvement Innovation Networks aims to “improve the quality and value of care and outcomes for children and families” via quality improvement (QI) networks. QICs are an organized, multifaceted approach to QI with (1) a specific topic for improvement with large variation in current practice; (2) clinical and QI experts sharing best practice knowledge; (3) multidisciplinary teams from multiple sites willing to improve care; (4) a model for improvement with measurable targets, data feedback to teams, and small tests of change; and (5) a series of structured activities to advance improvement, exchange ideas, and share experiences of participating teams.15–20 Reducing missed elevated pediatric BP was 1 of the 3 errors addressed by Project RedDE’s QIC.
Randomization
Figures 1 and 2 describe the randomization and 2 recruitment waves in detail. Briefly, in March 2015, we recruited 34 pediatric practices via email listservs and orientation webinars and randomized them, via computer random number generator in a nonblinded fashion, to 1 of 3 groups. We employed multivariate matching before randomization21 based on university affiliation, the presence of a self-reported prior record of working to reduce the target diagnostic errors, and total annual visits per total number of pediatricians or nurse practitioners in the clinic. Nine practices dropped out after randomization but before submitting data because of inability to collect data. Of the remaining 25 practices, 24 submitted complete project data through September 2017; one practice dropped out after 8 months when their lead physician left the practice. We included this practice’s data in analyses of the other 2 Project RedDE errors because it submitted data for those errors but not for BP. Nine additional practices were recruited in December 2015 to increase the size of the cohort and were similarly randomized via computer random number generator in a nonblinded fashion. Of these, 2 practices dropped out after randomization but before submitting data, also because of data collection burden; 2 other practices from a single care network merged into one team to boost their practice sample size. These 6 “Wave 2” teams participated alongside the 24 “Wave 1” teams. In this manner, we randomized 43 total practices and included 30 in the final analysis.
Fig. 1. Project RedDE timeline for missed elevated BP. aPractices were involved in Project RedDE during this time but worked exclusively on the 2 non-BP errors. Practices in groups 2 and 3 had already worked to reduce 1 or 2 other diagnostic errors, respectively, before beginning to work on BP errors. bDuring the sustain and maintenance phases, practices began working to reduce a second and third diagnostic error, respectively. cWave 2 practices integrated alongside Wave 1 practices, intervening first on Wave 1’s second diagnostic error. These practices never intervened on a third diagnostic error. BP indicates blood pressure; RedDE, Reducing Diagnostic Errors in Pediatric Primary Care.
Study Design
In July 2015, each of the 3 groups was assigned to collect retrospective baseline data (February–June 2015) on 1 of 3 diagnostic errors: missed elevated BP, delayed diagnosis of abnormal laboratory values, or missed diagnosis of adolescent depression.12 A priori, each error was to be examined independently. The groups collected 1 month of prospective baseline data (September 2015) and then began an 8-month QI action period in October of 2015 to reduce their assigned error. Concurrently (September 2015–May 2016), each group collected control data on a second diagnostic error. In a prospective, stepped-wedge fashion, after 8 months (June 2016), each group began to work to reduce a second diagnostic error during a second action period, sustain the improvement on their first error, and collect control data for the third diagnostic error. In February 2017, each group began to work to reduce the third error during a third action period, sustain the improvement on their second error, and maintain the improvement on their first error with reduced feedback and attention on the first error from the larger QIC (Fig. 1).
Using this design, each group of practices had a “control phase” in which they collected data on BP errors but did not attempt to reduce them, and all groups except one Wave 2 group had an “intervention phase” in which they actively worked to reduce BP errors. Two groups had a “sustain phase” in which they actively worked to reduce a second diagnostic error while sustaining improvement on BP errors; one group had a “maintenance phase” in which they actively worked to reduce 2 other diagnostic errors while maintaining improvement on BP errors.
Intervention
The primary intervention was a QIC. Each practice identified a 3-person QI team consisting of a physician, a nurse, and another professional (eg, administrator, business associate, front desk staff). After completing baseline data collection, teams participated in a 2-day video conference where they learned and practiced QI methodology and diagnostic error-specific content. Although all teams participated in the QIC video conference, only the teams about to intervene on missed elevated BP received information and training on this error. Following this, teams received rapid, transparent data feedback on performance with benchmarking, participated in monthly hour-long video conferences, and completed monthly mini-root cause analyses. These mini-root cause analyses examined a patient with a BP error in their clinic and 15 standardized patient and systems factors that could have led to this error.22,23 Teams focused their video conferences and mini-root cause analyses on missed elevated BP while in the BP intervention phase. The other 2 groups focused on other errors. Each practice had a QI coach provided by the project, and each group had an interactive email listserv and a group-specific website with project resources. Day-long video conference learning sessions were conducted every 8 months as practices transitioned to working on a new diagnostic error (Fig. 1). When practices were working on their second diagnostic error, monthly video conferences provided transparent data feedback in the form of run charts from both their first and second diagnostic errors. When working on their third diagnostic error, monthly video conferences presented data from both their second and third diagnostic errors, and data from their first error were presented only quarterly. Practices could always access all of their data independently. We estimate practices spent an average of 4 hours per month on Project RedDE–related activities: an 8-hour learning session every 8 months (averaging 1 hour per month), a 1-hour video conference in each nonlearning-session month, 1 hour for team QI meetings, and 1 hour for data entry and collection. Practices also spent additional, difficult-to-quantify time developing and implementing changes, including new tools and workflows.
Project leadership developed a BP “change package” to help teams with (1) implementing a uniform BP measurement and screening process; (2) using systematic tools to identify patients with elevated BPs quickly; and (3) helping providers know, perform, and document appropriate actions when BPs were elevated. Each of these 3 domains had associated tools for improvement based on the United States’ National Heart, Lung, and Blood Institute9 and AAP6 hypertension guidelines. To ensure staff employed the correct BP measurement technique, we used videos and instructional tools to teach providers how to position patients, choose the right cuff, and obtain BP measurements.24,25 As over half the participating teams were on the Epic electronic health record (EHR) platform (Epic Systems, Verona, WI), prebuilt Epic tools (dot phrases), such as “.bpfa” to automatically include BP percentiles in medical documentation, were shared and encouraged. Non-Epic practices worked with their EHR vendor to incorporate similar tools and/or used available spreadsheets or smartphone apps to identify elevated BPs. Other tools included pocket BP guides, reminder posters for correct processes, visual cues such as heart magnets on patient doors when a repeat BP measurement was needed, and patient education materials. Finally, we shared information about diagnostic coding, billing tips, and recommended follow-up actions. All of these resources were maintained on the Project RedDE website, and practices shared new and modified tools throughout the QIC. All resources were made available to the public following the project’s conclusion.26
Measures
We utilized pragmatic error measures with efficient data collection methods to accommodate the needs of high-throughput practices. Inclusion criteria for the elevated BP diagnostic error measure were patients 3 through 22 years of age who had an elevated systolic or diastolic BP recorded at their health supervision visit. We defined elevated BPs as ≥90th percentile for age, height, and sex or ≥120 mm Hg systolic or ≥80 mm Hg diastolic at any age.9 The primary outcome measure was the number of patients with an elevated BP who received an appropriate provider action per 100 patients with elevated BP. This provider “appropriate action” served to confirm that a diagnosis was made because not all providers document a diagnosis of “an elevated BP.” Appropriate actions included any of the following: (a) rechecking the BP; (b) noting a plan to recheck the BP at a future visit; (c) referral to a hypertension specialist (eg, pediatric cardiologist or nephrologist); and/or (d) laboratory or radiologic studies ordered to evaluate causes of elevated BP. More than one action could be selected, and actions had to occur within 30 days of the visit. Definitions of “appropriate actions” were necessarily broad because the study relied on front-line clinicians with limited time to collect data. A research-team chart review was beyond the scope of this work.
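To make the measure definition concrete, the minimal sketch below (Python; illustrative only and not part of the study protocol) classifies a visit as having an elevated BP under these criteria and tallies the primary outcome per 100 eligible patients. The `Visit` fields and the `bp_percentile` lookup (left as a stub) are hypothetical stand-ins for a practice’s own data source and the published normative tables.

```python
from dataclasses import dataclass
from typing import List


def bp_percentile(age_years: float, sex: str, height_cm: float,
                  systolic: int, diastolic: int) -> float:
    """Hypothetical lookup returning the higher of the systolic and diastolic
    percentiles from age-, sex-, and height-specific norms. A real
    implementation would consult the published reference tables."""
    raise NotImplementedError("supply a normative-table lookup here")


@dataclass
class Visit:
    age_years: float
    sex: str
    height_cm: float
    systolic: int
    diastolic: int
    appropriate_action: bool  # any action (a)-(d) within 30 days of the visit


def is_elevated(v: Visit) -> bool:
    """Elevated BP: >=120 mm Hg systolic or >=80 mm Hg diastolic at any age,
    or >=90th percentile for age, height, and sex."""
    if v.systolic >= 120 or v.diastolic >= 80:
        return True
    return bp_percentile(v.age_years, v.sex, v.height_cm,
                         v.systolic, v.diastolic) >= 90.0


def primary_outcome_per_100(visits: List[Visit]) -> float:
    """Patients with elevated BP who received an appropriate provider action,
    per 100 patients with elevated BP (the primary outcome measure)."""
    eligible = [v for v in visits
                if 3 <= v.age_years <= 22 and is_elevated(v)]
    if not eligible:
        return float("nan")
    acted = sum(v.appropriate_action for v in eligible)
    return 100.0 * acted / len(eligible)
```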
A secondary measure was the number of children with an elevated BP in whom the provider documented that the BP was elevated or documented an appropriate action, as described above, per 100 patients with elevated BP. This measure expands the primary outcome measure to include instances when a pediatrician may recognize an elevated BP but intentionally or unintentionally not take a recommended action. For example, a provider may not take appropriate action for a patient with known white coat hypertension. Another secondary outcome measure was the number of patients with elevated BP in whom the provider documented sex-, height-, and age-specific BP percentiles, an essential step in recognizing an elevated BP.9 This measure was important because pediatricians face challenges in interpreting normal versus abnormal BP values.11 Practices examined the first 10 patients each month who met inclusion criteria for the BP diagnostic error. If practices had fewer than 10 eligible patients in a given month, they entered all available data.
As a process measure, practices were asked to document the percentage of patients 3 years of age or older attending health supervision visits who had their BP measured. Once practices demonstrated >90% compliance with the process measure for 2 consecutive months, they were exempt from reporting this measure.
Practices were taught the measure definitions via multiple webinars, slides, and written materials. Listservs and QI coaches were available for questions and clarifications. For each eligible patient, practices recorded age, sex, and insurance status (public, private, self, unknown) and entered data into a web-based portal. Insurance status was included as a potential confounder because it is an easily collectible, partial marker of socioeconomic status, which has previously been shown to be associated with errors in ambulatory care.27 Practice demographics, including items such as university affiliation, previous work on these errors, clinic and patient demographics, and QI skill, were identified via a self-report questionnaire before the start of the project.
Statistical Analysis
We used patients as the unit of analysis and compared the primary outcome, the mean number of patients with elevated BP who received an appropriate provider action per 100 patients with elevated BP, between the intervention and control phases. Primary outcome effect measures are presented as model-based estimates of risk differences (RDs). We applied generalized mixed-effects logistic regression models adjusted for age, sex, insurance status, and wave, with month-specific and practice-specific intercepts considered random and age, sex, and insurance status considered fixed. We excluded patients with incomplete demographic data from the final analysis. Power analyses were revised based on Wave 1 baseline data error rates.12 The minimally detectable RD effect size between control and intervention phases with >80% power at a 2-sided significance level of 0.05 was ≥9.1%. Similar models examined secondary outcomes and differences in the primary outcome between the intervention and sustain phases and between the sustain and maintenance phases (Fig. 1). The latter comparisons investigated whether practices could sustain and/or maintain improvements while working on other diagnostic errors. Intervention versus control phase patient demographics and elevated BP actions taken were compared with Chi-square tests without clustering. We additionally examined aggregated primary and secondary outcomes using statistical process control p-charts, with Nelson rules signifying special-cause changes.35 The intervention’s initiation was aligned so that each group began the intervention at “month 1.” Small multiple p-charts identified trends across groups and variation between clinics. We completed all data analyses with SAS v9.3 (SAS Institute Inc., Cary, NC). This study was approved by the AAP’s and the Albert Einstein College of Medicine’s Institutional Review Boards.
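To illustrate the control-chart approach, the following minimal sketch (Python, for illustration only; the study analyses used SAS) computes a p-chart center line and 3-sigma limits for the monthly proportion of patients with elevated BP who received an appropriate action, and flags a sustained shift using one commonly applied Nelson rule (nine consecutive points on the same side of the center line).35 The variable names, the made-up counts, and the choice of that specific rule are illustrative assumptions, not the project’s exact implementation.

```python
import numpy as np


def p_chart(acted: np.ndarray, eligible: np.ndarray):
    """Monthly p-chart for the proportion of eligible patients with an
    appropriate action: per-month proportions, pooled center line, and
    3-sigma upper/lower control limits (clipped to [0, 1])."""
    p = acted / eligible                       # monthly proportions
    p_bar = acted.sum() / eligible.sum()       # center line (pooled proportion)
    sigma = np.sqrt(p_bar * (1 - p_bar) / eligible)
    ucl = np.clip(p_bar + 3 * sigma, 0, 1)
    lcl = np.clip(p_bar - 3 * sigma, 0, 1)
    return p, p_bar, ucl, lcl


def nelson_rule_2_shift(p: np.ndarray, center: float, run: int = 9):
    """Flag months ending a run of `run` consecutive points on the same side
    of the center line (Nelson rule 2), signaling a sustained shift."""
    side = np.sign(p - center)
    flags, count, prev = [], 0, 0.0
    for i, s in enumerate(side):
        count = count + 1 if (s == prev and s != 0) else 1
        prev = s
        if s != 0 and count >= run:
            flags.append(i)
    return flags


# Example with made-up monthly counts (not study data).
eligible = np.array([10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10, 10])
acted = np.array([5, 6, 5, 8, 8, 9, 8, 9, 9, 8, 9, 9])
p, p_bar, ucl, lcl = p_chart(acted, eligible)
print(f"center line = {p_bar:.2f}")
print("sustained shift flagged at month index:", nelson_rule_2_shift(p, p_bar))
```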
RESULTS
Demographics of the 30 practices included in the primary analysis are presented in Table 1. Data on 1,728 patients were available for the control phase and 1,834 patients for the intervention phase (Fig. 2). We excluded 140 patients (3.9% of total patients) from the final analysis due to missing insurance data. Complete patient demographics are presented in Table 2.
Table 1.: Demographics of Analyzed Practices at Baseline: N (%)
Table 2.: Demographics of Included Patients and Actions Taken on Patients With Elevated BPs in Primary Analysis
Fig. 2. Modified CONSORT flow diagram for the stepped-wedge trial. *One practice in group 2 incorrectly entered control data; their intervention data were included.
The model-based estimated mean percentage of patients with either elevated systolic or diastolic BP who received an appropriate action increased from 57.6% in the control phase to 73.5% in the intervention phase (RD, 16.0%; 95% CI, 12.3%–20.0%; P < 0.0001). Of the 1,366 intervention and 969 control patients who received an appropriate action, 84% had their BP rechecked in the intervention phase versus 75% in the control phase (P < 0.001); 27% had a plan to recheck BP at a future visit in the intervention phase versus 22% in the control phase (P = 0.004); and 3% had a referral to a specialist in the intervention phase versus 7% in the control phase (P < 0.001; Table 2).
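As a rough arithmetic check, the reported RD approximately equals the difference of the two model-based percentages: 73.5% − 57.6% = 15.9% ≈ 16.0%; the small discrepancy from simple subtraction reflects rounding and the covariate-adjusted, model-based estimation.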
Practices continued to improve when comparing the intervention and sustain phases (RD, 5.2%; 95% CI, 1.5%–8.9%; P = 0.006) and neither worsened nor improved when comparing the maintenance and sustain phases (RD, 0.9%; 95% CI, −4.7% to 6.6%; P = 0.743; Table 3).
Table 3.: Primary and Secondary Outcome Results
The Supplemental Figures (available at https://links.lww.com/PQ9/A134 and https://links.lww.com/PQ9/A135) demonstrate a significant shift in the primary outcome beginning in the first intervention month and a second shift beginning in the sixth intervention month. Variation is observed in the small multiples p-charts between and within groups, with some clinics still not at 80% for the primary outcome by the conclusion of the intervention and others already at 80% during the baseline period.
Secondary outcomes demonstrated an increase in documentation of elevated BP or appropriate action taken (RD, 14.0%; 95% CI, 10.3%–17.7%; P < 0.0001) and an increase in documentation of BP percentiles (RD, 20.1%; 95% CI, 16.2%–24.1%; P < 0.0001) comparing the intervention versus control phases. Documentation of elevated BP or “appropriate action taken” continued to improve in the sustain phase but not the maintenance phase, whereas documentation of BP percentiles improved in both sustain and maintenance phases (Table 3). More than 90% of practices successfully met the process measure of measuring BP in >90% of patients 3 years old or older by the second month of their intervention phase.
DISCUSSION
In one of the first cluster-randomized, stepped-wedge trials to address diagnostic error, a national QIC intervention increased recognition of elevated BP in primary care pediatrics by 28% relative to baseline, an increase that was sustained for 16 months even as practices began focusing QI efforts elsewhere. Practices were also able to improve the documentation of BP percentiles in their EHR, a key first step in recognizing that a child has elevated BP. Practice retention was high following initial attrition due to data collection burden. This QIC strategy can potentially serve as a model for future diagnostic error reduction research and implementation initiatives in other clinical domains.
Although a missed elevated BP is not immediately dangerous, this error occurs at high frequency12; primary care pediatricians would like to see it prioritized13; and it eventually takes a significant toll on pediatric and adult health.6,7 Reducing common and potentially harmful errors through collaboration, data benchmarking, QI coaching, and mini-root cause analyses offers one possible path for addressing diagnostic errors, a field in which few interventions focus on pediatric and ambulatory patients.28 It is unclear whether improvement came from the bundle of intervention tools provided to the practices or from the focus on elevated BP that came with being part of a national QIC. By focusing on just one error at a time, practices would also likely experience less data collection burden, reducing the risk of attrition. The effect size seen in Project RedDE is comparable to previous QIC results,19 especially when considering studies with a comparable initial prevalence of errors.29 QICs are often resource intensive, and Project RedDE demonstrates a benchmark for what practices across the country can achieve with dedicated focus and collaboration. The low attrition rate once practices demonstrated data collection capacity (1 out of 31 practices) suggests that the burden of participating and working to improve these errors was not overwhelming and that practices found value in this work.
A common challenge identified by practices was using EHR systems designed for adults with pediatric patients. Although many of the practices’ EHRs would flag “abnormal BPs,” alerts were often based on adult norms rather than pediatric values. Lack of pediatric-specific content in EHR systems is well recognized and impacts the quality of pediatric care.30,31 A focus on pediatric-specific EHR content, coupled with the evidence provided through Project RedDE, enabled many practices to work with vendors to introduce pediatric-specific BP percentiles and alerts. Our work suggests the importance of requiring EHR vendors to utilize pediatric-specific ranges and norms, despite the appreciably smaller pediatric healthcare population.32,33 Also, practices modified note templates to prompt providers to acknowledge abnormal BP percentiles so that clinicians could take appropriate actions.
Pediatric BP measurement technique is a key area for intervention,34 although it was not specifically measured in this work. Many staff were unaware of the limitations of oscillometric automated BP measurement devices and of the impact that incorrect patient preparation and positioning have on BP measurement.9 Lack of adherence to essential preparatory steps is likely widespread and has appreciable implications for the diagnosis of elevated BP in children. During the control phases, inappropriate BP measurement likely led to false-positive elevated BP measurements, and meaningless alerts likely contributed to alert fatigue.32 Anecdotally, many practices reported fewer patients with elevated BP once staff were better trained at measuring BP and therefore had fewer false-positive alerts. Additionally, a smaller percentage of patients were referred to specialists in the intervention phase, potentially suggesting decreased unnecessary healthcare utilization for false-positive elevated BP measurements.
The study has several limitations. Practices enrolled in a QIC are likely not representative of all practices, given a heightened interest in errors and QI, and we are unable to comment on practices that received recruitment emails or attended orientation webinars but chose not to participate. Further, appropriate actions for elevated BP were purposely broad, meaning that some actions counted as appropriate might be considered insufficient or incorrect if examined more closely. These biases may contribute to an underestimation of true diagnostic error rates in the Project RedDE cohort compared with other practices. This study was able to focus only on the first step in the hypertension diagnostic process, diagnosing an elevated BP.
Further, 11 of the 43 randomized practices withdrew because of data collection burden before attempting to change their clinic processes. It is unclear whether easier data collection would have reduced this attrition rate. Practices that dropped out did not submit demographic data or any BP measures; therefore, neither an intention-to-treat analysis nor a comparison of demographics between participating and nonparticipating practices was possible. Similarly, no BP-specific balancing measures were collected because of concern that practices would be overwhelmed by data collection burden.
Additionally, no direct chart review verifications were performed, so there could be variability in the application of data definitions. The research team answered questions and shared clarifications regarding data collection. Finally, practices were asked to evaluate the charts of the first 10 patients each month who met inclusion criteria. Although this is not a random sample, it does reduce the potential for biased chart selection compared with convenience sampling.
CONCLUDING SUMMARY
Implementation of a QIC in a national group of pediatric practices reduced missed diagnoses of pediatric elevated BP and sustained that reduction. Future work should focus on using similar approaches to improve the diagnosis of pediatric hypertension and quantify patient outcomes.
ACKNOWLEDGMENTS
The authors gratefully acknowledge the contribution of all pediatric primary care sites who participated in this work and Dr. O’Donnell, Dr. Adelman, Dr. Stein, Dr. Lehmann, Dr. Lilienfeld, Ms. Norton, American Academy of Pediatrics’ Quality Improvement and Innovations Network, and Dr. Helms for assistance with the study.
DISCLOSURE
The authors have no financial interest to declare in relation to the content of this article.
REFERENCES
1. National Center for Health Statistics. Health, United States, 2015: With Special Feature on Racial and Ethnic Health Disparities. Hyattsville, MD; 2016.
2. WHO. The top 10 causes of death. 2014.
http://www.who.int/mediacentre/factsheets/fs310/en/. Accessed July 8, 2016.
3. Lauer RM, Clarke WR. Childhood risk factors for high adult blood pressure: the Muscatine Study. Pediatrics. 1989;84:633–641.
4. Shear CL, Burke GL, Freedman DS, et al. Value of childhood blood pressure measurements and family history in predicting future blood pressure status: results from 8 years of follow-up in the Bogalusa Heart Study. Pediatrics. 1986;77:862–869.
5. Kelly RK, Thomson R, Smith KJ, et al. Factors affecting tracking of blood pressure from childhood to adulthood: the Childhood Determinants of Adult Health Study. J Pediatr. 2015;167:1422–1428.e2.
6. Expert Panel on Integrated Guidelines for Cardiovascular Health and Risk Reduction in Children and Adolescents; National Heart, Lung, and Blood Institute. Expert panel on integrated guidelines for cardiovascular health and risk reduction in children and adolescents: summary report. Pediatrics. 2011;128(suppl 5):S213–S256.
7. Lloyd-Jones DM, Hong Y, Labarthe D, et al.; American Heart Association Strategic Planning Task Force and Statistics Committee. Defining and setting national goals for cardiovascular health promotion and disease reduction: the American Heart Association’s strategic Impact Goal through 2020 and beyond. Circulation. 2010;121:586–613.
8. Flynn JT, Kaelber DC, Baker-Smith CM, et al.; Subcommittee on Screening and Management of High Blood Pressure in Children. Clinical practice guideline for screening and management of high blood pressure in children and adolescents. Pediatrics. 2017;140:e20171904.
9. Falkner B, Daniels SR. Summary of the fourth report on the diagnosis, evaluation, and treatment of high blood pressure in children and adolescents. Hypertension. 2004;44:387–388.
10. Brady TM, Solomon BS, Neu AM, et al. Patient-, provider-, and clinic-level predictors of unrecognized elevated blood pressure in children. Pediatrics. 2010;125:e1286–e1293.
11. Bijlsma MW, Blufpand HN, Kaspers GJ, et al. Why pediatricians fail to diagnose hypertension: a multicenter survey. J Pediatr. 2014;164:173–177.e7.
12. Rinke ML, Singh H, Heo M, et al. Diagnostic errors in primary care pediatrics: Project RedDE. Acad Pediatr. 2018;18:220–227.
13. Rinke ML, Singh H, Ruberman S, et al. Primary care pediatricians’ interest in diagnostic error reduction. Diagnosis (Berl). 2016;3:65–69.
14. Bundy DG, Singh H, Stein RE, et al. The design and conduct of Project RedDE: a cluster-randomized trial to reduce diagnostic errors in pediatric primary care. Clin Trials. 2019;16:154–164.
15. Miller MR, Niedner MF, Huskins WC, et al.; National Association of Children’s Hospitals and Related Institutions Pediatric Intensive Care Unit Central Line-Associated Bloodstream Infection Quality Transformation Teams. Reducing PICU central line-associated bloodstream infections: 3-year results. Pediatrics. 2011;128:e1077–e1083.
16. Pronovost P, Needham D, Berenholtz S, et al. An intervention to decrease catheter-related bloodstream infections in the ICU. N Engl J Med. 2006;355:2725–2732.
17. Nadeem E, Olin SS, Hill LC, et al. Understanding the components of quality improvement collaboratives: a systematic literature review. Milbank Q. 2013;91:354–394.
18. Hulscher ME, Schouten LM, Grol RP, et al. Determinants of success of quality improvement collaboratives: what does the literature show? BMJ Qual Saf. 2013;22:19–31.
19. Schouten LM, Hulscher ME, van Everdingen JJ, et al. Evidence for the impact of quality improvement collaboratives: systematic review. BMJ. 2008;336:1491–1494.
20. Beers LS, Godoy L, John T, et al. Mental health screening quality improvement learning collaborative in pediatric primary care. Pediatrics. 2017;140:e20162966.
21. Greevy R, Lu B, Silber JH, et al. Optimal multivariate matching before randomization. Biostatistics. 2004;5:263–275.
22. Rinke ML, Chen AR, Bundy DG, et al. Implementation of a central line maintenance care bundle in hospitalized pediatric oncology patients. Pediatrics. 2012;130:e996–e1004.
23. Rinke ML, Bundy DG, Chen AR, et al. Central line maintenance bundles and CLABSIs in ambulatory oncology patients. Pediatrics. 2013;132:e1403–e1412.
24. Williams JS, Brown SM, Conlin PR. Videos in clinical medicine. Blood-pressure measurement. N Engl J Med. 2009;360:e6.
26. Rinke ML. Toolkit for Reducing Diagnostic Errors in Primary Care - Project RedDE! 2018;
https://www.aap.org/en-us/professional-resources/quality-improvement/Project-RedDE/Pages/Project-RedDE.aspx. Accessed November 15, 2018.
27. Piccardi C, Detollenaere J, Vanden Bussche P, et al. Social disparities in patient safety in primary care: a systematic review. Int J Equity Health. 2018;17:114.
28. McDonald KM, Matesic B, Contopoulos-Ioannidis DG, et al. Patient safety strategies targeted at diagnostic errors: a systematic review. Ann Intern Med. 2013;158(5 pt 2):381–389.
29. Benedetti R, Flock B, Pedersen S, et al. Improved clinical outcomes for fee-for-service physician practices participating in a diabetes care collaborative. Jt Comm J Qual Saf. 2004;30:187–194.
30. Anoshiravani A, Gaskin GL, Groshek MR, et al. Special requirements for electronic medical records in adolescent medicine. J Adolesc Health. 2012;51:409–414.
31. Spooner SA; Council on Clinical Information Technology, American Academy of Pediatrics. Special requirements of electronic health record systems in pediatrics. Pediatrics. 2007;119:631–637.
32. Lehmann CU; Council on Clinical Information Technology. Pediatric aspects of inpatient health information technology systems. Pediatrics. 2015;135:e756–e768.
33. Johnson KB, Lehmann CU; Council on Clinical Information Technology of the American Academy of Pediatrics. Electronic prescribing in pediatrics: toward safer and more effective medication management. Pediatrics. 2013;131:e1350–e1356.
34. Rakotz MK, Townsend RR, Yang J, et al. Medical students and measuring blood pressure: results from the American Medical Association Blood Pressure Check Challenge. J Clin Hypertens (Greenwich). 2017;19:614–619.
35. Nelson LS. The Shewhart Control Chart-Tests for Special Causes. J Qual Technol. 1984;16(4):237–239.