
Harnessing Implementation Science to Increase the Impact of Health Equity Research

Chinman, Matthew PhD*; Woodward, Eva N. PhD†,‡; Curran, Geoffrey M. PhD†,§; Hausmann, Leslie R.M. PhD*,∥

doi: 10.1097/MLR.0000000000000769
Perspectives

Background: Health disparities are differences in health or health care between groups based on social, economic, and/or environmental disadvantage. Disparity research often follows 3 steps: detecting (phase 1), understanding (phase 2), and reducing (phase 3) disparities. Although disparities have narrowed over time, many remain.

Objectives: We argue that implementation science could enhance disparities research by broadening the scope of phase 2 studies and offering rigorous methods to test disparity-reducing implementation strategies in phase 3 studies.

Methods: We briefly review the focus of phase 2 and phase 3 disparities research. We then provide a decision tree and case examples to illustrate how implementation science frameworks and research designs could further enhance disparity research.

Results: Most health disparities research emphasizes patient and provider factors as predominant mechanisms underlying disparities. Applying implementation science frameworks like the Consolidated Framework for Implementation Research could help disparities research widen its scope in phase 2 studies and, in turn, develop broader disparities-reducing implementation strategies in phase 3 studies. Many phase 3 studies of disparity-reducing implementation strategies are similar to case studies, whose designs are not able to fully test causality. Implementation science research designs offer rigorous methods that could accelerate the pace at which equity is achieved in real-world practice.

Conclusions: Disparities can be considered a “special case” of implementation challenges—when evidence-based clinical interventions are delivered to, and received by, vulnerable populations at lower rates. Bringing together health disparities research and implementation science could advance equity more than either could achieve on their own.

*Veterans Affairs Pittsburgh Healthcare System, VA Center for Health Equity Research and Promotion (CHERP), Pittsburgh, PA

†Central Arkansas Veterans Healthcare System, South Central Mental Illness Research Education and Clinical Center (MIRECC)

‡Department of Psychiatry, University of Arkansas for Medical Sciences

§Center for Implementation Research, University of Arkansas for Medical Sciences, Little Rock, AR

∥Department of Medicine, Division of General Internal Medicine, University of Pittsburgh School of Medicine, Pittsburgh, PA

Supported by a grant from VA Health Services Research & Development (CIN 13-405; PI, Fine) and by the Office of Academic Affiliations, Advanced Fellowship Program in Mental Illness Research and Treatment, Department of Veterans Affairs. The views expressed here are those of the authors and do not represent those of the Department of Veterans Affairs or the United States Government.

The authors declare no conflict of interest.

Reprints: Matthew Chinman, PhD, Center for Health Equity Research and Promotion (CHERP), VA Pittsburgh Healthcare System, Research Office Building (151R), University Drive C, Pittsburgh, PA 15240. E-mail: chinman@rand.org.

Health disparities are significant differences in health or health care between ≥2 groups based on social, economic, and/or environmental disadvantage.1 Groups that experience worse health or health care, or “vulnerable” populations,2 can be defined by race/ethnicity, sexual orientation, sex identity, socioeconomic status, functional limitation, or several other sociodemographic or clinical characteristics. Since the Institute of Medicine released its seminal report on racial and ethnic disparities in health care,3 an abundance of research has been conducted to identify,4 understand,5 and develop strategies6 to reduce disparities across many vulnerable populations. However, we observe that much of the existing research on health disparities focuses on patient and provider factors as the predominant underlying mechanisms. In turn, most existing strategies to reduce disparities target patients and/or providers7 and rarely target system-level factors, even though such factors likely contribute to health disparities.3,8

As echoed by disparity researchers,5,8–12 broadening the scope of disparities research to include larger ecological levels (eg, clinics, hospitals, systems) and factors that indirectly influence health care delivery (eg, leadership support for a focus on equity) could open additional research avenues that yield better equity outcomes. Overall, relatively few efficacious and effective implementation strategies have been tested for their impact on disparities, as documented in multiple literature reviews commissioned by the Robert Wood Johnson Foundation’s Finding Answers initiative.13–17 Although there has been a recent surge in research testing disparity-reducing interventions conceptualized across multiple ecological levels,18–20 much of that research has not used randomized trials or other designs that allow tests of causality. Here, we argue that implementation science offers methods to rigorously test these newer interventions and assess whether they should be replicated more broadly.

Implementation science is defined as “the scientific study of methods to promote the systematic uptake of research findings and other evidence-based practices into routine practice, and, hence, to improve the quality and effectiveness of health services.”21(p1) With its inherent focus on system change, implementation science routinely addresses how system-level factors facilitate or impede adoption of those practices, in addition to patient and provider factors.22 Although not often made explicit, implementation scientists and disparities researchers both work toward a common goal—to ensure that all patients receive the highest quality health care supported by research evidence. We argue that implementation science could enhance disparities research by broadening the scope of studies that attempt to explain disparities and by offering methods to test disparity-reducing strategies. We focus this paper on implementation of evidence-based clinical interventions delivered in health care systems; we define a clinical intervention as any clinical or therapeutic practice, delivery system, or health prevention or promotion activity intended to improve health care outcomes.23 In the remainder of the paper, we use the term implementation strategy to refer to methods or techniques used to enhance the “adoption, implementation, and sustainability” of a clinical intervention.24


EXPANDING THE FOCUS OF DISPARITIES RESEARCH

Disparity research often follows a 3-step pipeline—detecting disparities (phase 1), exploring underlying mechanisms (phase 2), and developing and evaluating interventions to reduce disparities (phase 3).2,5 Researchers have made significant strides detecting disparities through phase 1 studies.4,25,26 We argue that incorporating implementation science frameworks into phase 2 and 3 study designs could broaden the health disparities research agenda and increase the impact of individual studies.

A limitation of many phase 2 studies is that they tend to focus on patient factors, provider factors8 (eg, communication style, bias5,27), or structural factors that may be difficult to change (eg, fee-for-service vs. Health Maintenance Organization models). As an example, studies examining health disparities in cancer have focused almost exclusively on patient factors, such as socioeconomic status, negative health behaviors, and carcinogenic exposure.28 The National Healthcare Disparities Report has shifted from framing mechanisms of disparities primarily at the patient level in 200329 to highlighting some system-level factors since 2010 in a recurring chapter entitled “Health system infrastructure.” However, existing disparities research studies often miss the contribution of the health care system itself in unintentionally creating or maintaining disparities (eg, lack of the leadership, data systems, or organizational climate needed to address disparities).

Because phase 2 studies tend to focus on patient and provider factors, phase 3 studies naturally tend to include disparity-reducing implementation strategies that target individual patients or providers. For example, a review of 400 disparities studies found that “most efforts were directed at patients in the form of education or training.”8(p995) That is because, as Diez Roux30 points out, “Conceptual models have a profound impact on what is studied, how it is studied, and how results are interpreted.”30(p54) Thus, dominant models such as the Health Belief Model31 (which emphasizes individuals rationally weighing the costs of performing a health action against its possible benefits) steer the development and testing of disparity-reducing strategies toward being patient focused. There have been growing calls by Saha et al,5 Chin et al,8,10 and others9,11,12 for the use of multilevel ecological models when using implementation strategies to reduce disparities. Although investigators have responded by conducting more multicomponent, multilevel intervention studies to reduce disparities, these studies are often organic to the specific context in which they are conducted and lack critical research design elements, such as control groups, which limits replicability.18,20,32

Consistent with Saha et al5 and Chin et al,8 we present 2 ways that implementation science can enhance disparity research. First, we describe how conceptual frameworks developed to guide implementation science take into account ecological factors across multiple levels, including the health care system, and could be used to expand the focus of health disparities research beyond patients and providers. Then, we illustrate how implementation science research designs and frameworks could be used to test disparity-reducing strategies in rigorous trials.


USING IMPLEMENTATION SCIENCE FRAMEWORKS TO GUIDE PHASE 2 DISPARITIES RESEARCH

Implementation science has developed a number of frameworks that aim to promote successful implementation of evidence-based clinical interventions.33 One such framework—the Consolidated Framework for Implementation Research (CFIR34)—describes 5 domains (and 39 subdomains) that can either facilitate or hinder implementation of a given clinical intervention. Given the extent of the literature from which it draws, CFIR is well-suited to exemplify how the focus of phase 2 disparities studies can be broadened.

The first CFIR domain is the intervention characteristics of the clinical intervention, which can influence its adoption (eg, strength of evidence, relative advantage, complexity, adaptability). Even effective clinical interventions that are complex, have unclear relative advantage, or cannot be easily adapted to a local setting will likely be adopted more slowly.35 The next 3 domains of the CFIR are the outer setting (eg, broader social, political, and economic context), inner setting in which the clinical intervention is implemented (eg, structural characteristics, relationships, implementation readiness), and the characteristics of the individuals involved in implementation (eg, knowledge, efficacy, skills). The final domain is the implementation process taken to facilitate a clinical intervention’s use. Although CFIR takes into account patient and provider influence on intervention uptake, it also clearly focuses on health care organization, community, and policy level influences, which health disparities researchers also recognize as critical when trying to reduce disparities.5,8

Using an implementation framework like CFIR to design phase 2 disparities research studies could reveal that certain factors at levels beyond patients and providers indirectly cause or maintain a disparity. For example, the lack of infrastructure to facilitate real-time tracking of disparities in delivery of a clinical intervention may not have directly caused a disparity, but its absence may help to maintain it and thus could represent a potential area in which to intervene. If a phase 2 study did not focus on data monitoring as part of the assessment, it is unlikely that data monitoring would emerge as a target on which to focus in a phase 3 study. Although prior research has identified some patient, provider, and even community-level factors affecting uptake,10,11 factors have rarely been assessed from clinic and health care system sources (eg, medical directors, clinic managers). For example, a phase 2 study assessing factors contributing to the higher rates of cervical cancer in Ohio Appalachian regions conducted focus groups about the vaccine for human papillomavirus (a primary cause of cervical cancer) with young women, parents, community leaders, and health care providers. Following the social determinants of health model, the authors found factors related to health care access, patient-provider communication, high teen pregnancy rates, cost, and lack of vaccine information to be important barriers to obtaining the vaccine. However, additional data could be collected on the influence of health care system factors within each CFIR domain:

  • Intervention characteristics—do the required multiple doses of the vaccine make it less convenient to administer for those in Appalachia who may have transportation challenges?
  • Outer setting—does provider knowledge of the needs of patients from Appalachia influence the likelihood of providing the vaccine?
  • Inner setting—does the health care system’s use/nonuse of real-time data on vaccine administration influence its distribution?
  • Characteristics of individuals—how much do provider knowledge and efficacy in engaging Appalachian patients affect vaccine distribution?
  • Implementation process—to what extent is there a specific implementation plan to more broadly administer the vaccine?

The authors’ data collection taps into some of these areas, but expanding the inquiry and including additional stakeholders in the data collection, in this case clinical administrators, could provide additional information around which to structure implementation strategies. Table 1 gives general examples of phase 2 research questions by CFIR domain.
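
To make this concrete, a CFIR-guided phase 2 assessment plan might be organized as in the minimal sketch below. The questions paraphrase the bullet list above; the stakeholder groups, variable names, and code structure are illustrative assumptions rather than elements of the original study.

```python
# Illustrative sketch: organizing a CFIR-guided phase 2 assessment plan.
# Domains follow CFIR; the example questions paraphrase the HPV vaccine bullet
# list above, and the stakeholder groups are hypothetical suggestions.

CFIR_ASSESSMENT_PLAN = {
    "Intervention characteristics": {
        "question": ("Do the required multiple vaccine doses make administration "
                     "less convenient for patients with transportation challenges?"),
        "stakeholders": ["patients", "clinic managers"],
    },
    "Outer setting": {
        "question": ("Does provider knowledge of the needs of Appalachian patients "
                     "influence the likelihood of providing the vaccine?"),
        "stakeholders": ["providers", "community leaders"],
    },
    "Inner setting": {
        "question": ("Does the health care system's use/nonuse of real-time data "
                     "on vaccine administration influence its distribution?"),
        "stakeholders": ["medical directors", "data managers"],
    },
    "Characteristics of individuals": {
        "question": ("How much do provider knowledge and efficacy in engaging "
                     "Appalachian patients affect vaccine distribution?"),
        "stakeholders": ["providers"],
    },
    "Implementation process": {
        "question": "Is there a specific plan to administer the vaccine more broadly?",
        "stakeholders": ["clinical administrators", "quality improvement staff"],
    },
}

if __name__ == "__main__":
    # Print a simple interview-guide outline grouped by CFIR domain.
    for domain, item in CFIR_ASSESSMENT_PLAN.items():
        print(f"{domain}: ask {', '.join(item['stakeholders'])}")
        print(f"  - {item['question']}")
```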

TABLE 1


INCORPORATING IMPLEMENTATION SCIENCE DESIGNS INTO DISPARITIES RESEARCH STUDIES

Once important implementation factors have been identified in phase 2 studies, phase 3 disparities studies could include aims in which selected implementation strategies are tested for their impact on the delivery of effective clinical interventions, outcomes, and disparities. In implementation science, these specific implementation strategies are themselves the subject of research trials. A recent expert panel project called Expert Recommendations for Implementing Change (ERIC) identified an existing pool of 73 discrete implementation strategies sorted into 9 clusters (eg, engage consumers, evaluation and feedback, changing infrastructure, tailoring clinical interventions to the setting, developing stakeholder relationships, financial strategies, and providing implementers with support) as a way to begin to standardize how these strategies are discussed.36 Disparity researchers can select and tailor implementation strategies from this existing pool as a way to promote consistency from study to study. Implementation scientists have begun to use a variety of methods to link findings from their needs assessments (ie, a phase 2 study in disparities research) to specific strategies to test in implementation trials (ie, a phase 3 study).37

Implementation science has developed rigorous research designs that could be used to test a wide range of implementation strategies in disparities research. First are “hybrid” designs,23 which assess effectiveness and implementation aims simultaneously, modifying the traditional efficacy-effectiveness-dissemination research pipeline to accelerate the pace at which evidence-based clinical interventions get used in real-world practice.38 These designs are in contrast to “pure implementation trials,” which measure only implementation. As shown in Figure 1 and described below, each design can be applied to disparities research depending on the existing state of evidence for clinical interventions among certain vulnerable populations and knowledge of the mechanisms driving disparities (note: letters in the figure correspond to the same letters in the text below).

FIGURE 1


Hybrid Type I and Formative Evaluation

The Hybrid Type I design is a traditional effectiveness trial of a clinical intervention accompanied by a formative evaluation that tracks various aspects of that clinical intervention’s implementation (eg, integrity, dose) that could affect its adoption.39 As shown in Figure 1, a Hybrid Type I design may be appropriate if disparities in a particular health outcome have been documented through phase 1 research (A) and effective clinical interventions to improve the target outcome exist (B), but there is a lack of evidence documenting whether the clinical intervention is effective for a certain vulnerable population (C). This scenario is common given the systematic underrepresentation of vulnerable groups in clinical trials.40–42 The Hybrid Type I design can be used by disparities researchers to ensure that clinical interventions that are effective in the general population can be successfully extended to vulnerable populations. At the same time, adding formative evaluation to an effectiveness study allows researchers to learn more about what influences the real-world implementation of the clinical intervention in the vulnerable group, which can inform subsequent implementation studies. Such an addition is usually a modest investment that does not impede the full test of a clinical intervention’s effectiveness.

If a clinical intervention already has evidence of effectiveness in vulnerable groups (D), the next question is whether it is being delivered equally across groups. If not (E), and the factors that are influencing the lack of equitable implementation are unknown (F), then it would be appropriate to conduct just a formative evaluation. A formative evaluation guided by a comprehensive implementation framework such as CFIR would elucidate a wide range of factors that could influence successful delivery of the clinical intervention in the vulnerable group, which would inform the development and testing of implementation strategies to address the disparity in subsequent studies.

As an example, consider the racial disparity that exists in obstructive sleep apnea (OSA). Compared with whites, African Americans have OSA more frequently but get screened less often. The effectiveness of OSA screening across subgroups is clear; thus, a next step would be to conduct a formative evaluation to assess factors that facilitate or hinder screening for OSA among African Americans. Researchers at a community-based sleep center conducted focus groups with African American community members about their attitudes toward undergoing OSA screening, which then informed the development of a culturally grounded screening program that yielded some improvements in a pilot study.43 This traditional phase 2 study could be broadened by conducting a formative evaluation of multilevel factors as prescribed by various implementation frameworks. In fact, several factors beyond patient input that supported the adoption of better screening were mentioned anecdotally in the OSA study—for example, a supportive culture and implementation climate at the sleep center (CFIR’s inner setting). Systematically elucidating these other factors, for example via interviews with health center leaders and staff, could provide a more complete picture of what made that screening program successful, which could then be incorporated into future phase 3 disparities studies.

If a disparity has been documented (A), an effective clinical intervention exists (B) but is not reaching the vulnerable group (D), and reasons underlying the unequal implementation of the intervention are known (G), then the choice of design for the next study—potentially a Hybrid Type II, Type III, or pure implementation trial—is based on the strength of evidence for the intervention in the vulnerable population.


Hybrid Type II

If there is some evidence for the clinical intervention in the vulnerable group, but it still needs more testing in real-world settings (H), then a Hybrid Type II could be a good design choice. Studies using a Hybrid Type II design simultaneously assess the effects of a clinical intervention and an implementation strategy that has some evidence suggesting it would increase adoption of the intervention.

Consider as an example the domain of asthma, where African Americans have higher rates of emergency department visits, hospitalizations, and mortality compared with whites.44 A multilevel framework explicates several mechanisms underlying the disparity, including many CFIR domains such as the outer setting (state/federal regulations, insurance policy), inner setting (cultural sensitivity and diversity of the health care workforce), and individual providers (biases, skill in working with racial minority populations).45 There are also several effective clinical interventions for managing asthma in racial minority populations (eg, patient education, specialty clinics, medications).16 For a Hybrid Type II study to reduce racial disparities in asthma, one could bundle the asthma clinical interventions that have the most evidence and test whether their uptake is improved when facilitated by implementation strategies with some evidence for improving asthma care among African Americans (eg, culturally tailoring patient education, restructuring care settings to ensure coordinated care). To further bolster implementation, other possible strategies that could be added include bringing in outside assistance to support staff in implementing an asthma plan focused on racial minority populations, identifying and developing champions, and monitoring rates of asthma care and outcomes and feeding those data back to leadership to sustain improvement.

A typical Hybrid Type II design could involve randomizing multiple sites that had evidence of asthma disparities to usual care or asthma management (eg, patient education, specialty clinics, medications) plus the above implementation strategies. Given the disparity focus, it would be critical to analyze, by racial subgroup, both patient outcomes (eg, symptoms, quality of life, hospitalizations) and implementation outcomes (eg, adoption, reach, and fidelity of asthma clinical interventions) to determine if the clinical intervention bundle combined with implementation strategies reduced asthma disparities.
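
As a rough illustration of such a subgroup analysis, the sketch below simulates a cluster-randomized Hybrid Type II design and estimates whether the implementation strategy narrows the Black-white gap in receipt of guideline-concordant asthma care. All data, variable names, and effect sizes are hypothetical; an arm-by-race interaction with cluster-robust standard errors is one reasonable way, not the only way, to quantify disparity reduction.

```python
# Illustrative sketch (simulated data, not from any actual trial): testing whether
# an implementation strategy narrows a racial gap in receipt of guideline-
# concordant asthma care in a cluster-randomized Hybrid Type II design.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_sites, patients_per_site = 20, 100

rows = []
for site in range(n_sites):
    arm = site % 2                     # 0 = usual care, 1 = implementation strategy
    site_effect = rng.normal(0, 0.3)   # between-site variability
    for _ in range(patients_per_site):
        black = int(rng.integers(0, 2))  # 1 = Black patient, 0 = white patient
        # Hypothetical data-generating model: a baseline racial gap that the
        # implementation strategy partially closes (positive arm x race term).
        logit = 0.2 + site_effect - 0.8 * black + 0.3 * arm + 0.5 * arm * black
        received_care = rng.random() < 1 / (1 + np.exp(-logit))
        rows.append({"site": site, "arm": arm, "black": black,
                     "received_care": int(received_care)})
df = pd.DataFrame(rows)

# Logistic model with an arm x race interaction; cluster-robust SEs by site.
# The interaction term is the disparity-reduction estimate of interest.
model = smf.glm("received_care ~ arm * black", data=df,
                family=sm.families.Binomial())
result = model.fit(cov_type="cluster", cov_kwds={"groups": df["site"]})
print(result.summary())
```

A parallel model could be fit for patient outcomes (eg, hospitalizations), so that both implementation and clinical endpoints are examined by racial subgroup.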


Hybrid Type III

If there is stronger but not “strongest” evidence (eg, multiple clinical trials) for a clinical intervention in a vulnerable group (I), then a Hybrid Type III could be an appropriate design choice. Hybrid Type III designs primarily test how well an implementation strategy improves the use of a clinical intervention. Data collection in Hybrid Type III studies is focused on adoption and fidelity of the clinical intervention, with a secondary focus on patient-level health outcomes.

As an example, a Hybrid Type III study was conducted to reduce disparities in treatment for substance use and mental health disorders between individuals who are homeless and those who are not.46 Homeless populations have substance use and mental health disorders at a much higher rate than the general public.47 Seamlessly combining substance use and mental health treatments is the most efficacious approach for managing both conditions.48 However, delivering a combined treatment has been especially difficult in real-world settings that serve homeless Veterans.49 A Hybrid Type III randomized controlled trial was conducted to assess the impact of an implementation strategy called Getting To Outcomes (GTO) in helping homeless case management teams adopt an evidence-based clinical intervention called Maintaining Independence and Sobriety through Systems Integration, Outreach, and Networking-Veterans Edition (MISSION-Vet).50 All case management teams received standard training in MISSION-Vet (implementation as usual, or IU). Half also received GTO, which included an external facilitator who helped teams develop plans for rolling out MISSION-Vet and provided ongoing feedback about the amount of MISSION-Vet services delivered. The trial focused on the adoption, reach, and fidelity of MISSION-Vet services delivered, as well as on MISSION-Vet implementation barriers and facilitators assessed through CFIR-based qualitative interviews. Data were also collected on Veteran outcomes (mental health, substance use, community functioning, housing) from case manager reports.

No case managers in the IU group initiated MISSION-Vet, whereas 68% in the GTO group did. In the GTO group, however, the amount of services delivered was below the threshold established for MISSION-Vet. Outcome analyses showed that Veterans receiving MISSION-Vet were more likely to engage in treatment services overall and experienced small improvements in mental health and community functioning outcomes. Thus, GTO did have some impact on implementation, and the CFIR-based interviews identified several factors affecting implementation that could inform subsequent attempts to implement MISSION-Vet. Using those lessons, 1 site was preparing at the end of the study to restructure its team to better facilitate the use of MISSION-Vet. Implementation research such as this often yields information that, in an iterative fashion, can stimulate new approaches that can then be evaluated in subsequent studies. Although this study focused on bringing an evidence-based treatment to an underserved, vulnerable population, its disparity focus could have been enhanced by comparing services and outcomes with those of individuals who are not homeless.
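
For readers less familiar with implementation outcomes, the sketch below shows one common way to operationalize adoption, reach, and fidelity from service logs by study arm. The data, session threshold, and definitions are fabricated for illustration and are not those used in the MISSION-Vet trial.

```python
# Illustrative sketch (hypothetical service-log data): operationalizing adoption,
# reach, and fidelity by study arm. The numbers and threshold are fabricated and
# do not reflect the MISSION-Vet trial's definitions or results.
import pandas as pd

MIN_SESSIONS = 10  # hypothetical fidelity threshold, not the MISSION-Vet standard

# One row per case manager: study arm and MISSION-Vet services delivered.
logs = pd.DataFrame({
    "case_manager": range(8),
    "arm": ["IU", "IU", "IU", "IU", "GTO", "GTO", "GTO", "GTO"],
    "sessions_delivered": [0, 0, 0, 0, 5, 0, 12, 3],
    "veterans_on_caseload": [25, 30, 20, 28, 22, 26, 24, 27],
    "veterans_receiving_any_session": [0, 0, 0, 0, 4, 0, 9, 2],
})

grouped = logs.groupby("arm")
summary = pd.DataFrame({
    # Adoption: share of case managers delivering any MISSION-Vet session.
    "adoption": grouped["sessions_delivered"].apply(lambda s: (s > 0).mean()),
    # Reach: share of Veterans on the teams' caseloads receiving any session.
    "reach": grouped["veterans_receiving_any_session"].sum()
             / grouped["veterans_on_caseload"].sum(),
    # Fidelity: share of adopting case managers meeting the session threshold.
    "fidelity": grouped["sessions_delivered"].apply(
        lambda s: (s[s > 0] >= MIN_SESSIONS).mean() if (s > 0).any() else float("nan")),
})
print(summary)
```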


Pure Implementation Trials

These trials test different implementation strategies to improve the use of clinical interventions that already have the strongest evidence (J), and thus measure only implementation (eg, treatment delivery), not patient outcomes. As an example, consider a hypothetical trial testing implementation strategies to reduce a disparity in total knee replacement (TKR)—an evidence-based clinical intervention for those with advanced osteoarthritis.51 Despite this evidence, TKR is not utilized by racial minority groups at the same rate as by whites.52 A multisite, pure implementation study could compare racial differences in TKR delivery across sites randomly assigned to practice as usual (no implementation strategy), an implementation strategy involving patient education (recently shown to have a small impact on patients’ willingness to undergo TKR53), or an implementation strategy involving education plus a more system-focused approach such as evaluation and data feedback (with an emphasis on reporting by racial group) and technical assistance at the clinic level.
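
Taken together, the lettered branches of Figure 1 walked through above amount to a decision procedure. The sketch below restates that logic as described in the text; the function and parameter names are hypothetical, and the sketch is a paraphrase of the figure rather than a validated decision tool.

```python
# Illustrative sketch of the Figure 1 decision logic as described in the text.
# Function and parameter names are hypothetical; branches not elaborated in the
# article (eg, what to do when delivery is already equitable) are inferences.

def suggest_design(disparity_documented: bool,
                   effective_intervention_exists: bool,
                   evidence_in_vulnerable_group: str,  # "none", "some", "strong", or "strongest"
                   delivered_equitably: bool,
                   mechanisms_of_inequity_known: bool) -> str:
    if not disparity_documented:                   # (A) start with phase 1 research
        return "Phase 1 study to detect and document the disparity"
    if not effective_intervention_exists:          # (B) nothing yet to implement
        return "Efficacy/effectiveness research on candidate clinical interventions"
    if evidence_in_vulnerable_group == "none":     # (C) effectiveness unknown in the vulnerable group
        return "Hybrid Type I: effectiveness trial plus formative evaluation"
    if delivered_equitably:                        # (D/E) no delivery gap identified
        return "Monitor equity; no disparity-focused implementation trial indicated"
    if not mechanisms_of_inequity_known:           # (F) reasons for unequal delivery unknown
        return "Formative evaluation guided by an implementation framework (eg, CFIR)"
    # (G) reasons known: choose by strength of evidence in the vulnerable group
    if evidence_in_vulnerable_group == "some":     # (H) some evidence; needs real-world testing
        return "Hybrid Type II: test the clinical intervention and implementation strategy together"
    if evidence_in_vulnerable_group == "strong":   # (I) stronger, but not the strongest, evidence
        return "Hybrid Type III: primarily test the implementation strategy"
    return "Pure implementation trial"             # (J) strongest evidence for the intervention

# Example: strong (but not strongest) evidence, unequal delivery, mechanisms known.
print(suggest_design(True, True, "strong", False, True))  # Hybrid Type III
```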


CAVEATS AND CHALLENGES

Our goal was to illustrate how implementation science frameworks and research designs could enhance health disparities research. Although we maintain that implementation science tools can be helpful in designing disparities research to address factors across multiple ecological levels that may contribute to disparities, we also recognize that implementation science is not a panacea for reducing disparities. For instance, although implementation science shines light on aspects of the health care system that are not often the focus of health disparities research, it can also leave out issues that are important for increasing uptake of evidence-based practices and reducing health disparities but are outside the control of health care systems (eg, availability of healthy food, exercise space, poverty).11,12 Further, hybrid designs often entail challenging design choices and tradeoffs (eg, changing strategies during the trial).23 Also, implementation science is evolving. Although much of the above discussion treats adoption of an evidence-based clinical intervention as the final step, newer implementation science work has emphasized the importance of having organizations continue to adapt interventions to the local context as a sign of true sustainability.54–56 Research on this ongoing adaptation process could be another avenue for disparity research. Finally, we recognize that not every disparity is a result of inadequate or inequitable health care delivery. Research on biological underpinnings of health disparities is needed to ensure treatment advancements are effective across all patient populations.

We also acknowledge several challenges to blending implementation science and disparities research. First, grant agencies must become more comfortable with funding implementation projects. From 2007 to 2014, the National Institutes of Health dedicated only 0.09% of its funding to such projects.57 Given the long time needed for research translation, spending more on studies with the implementation science designs discussed here could help reduce disparities more quickly. Another barrier is the lack of training that implementation scientists and disparity researchers have in each other’s fields. Disparity researchers have recognized the importance of implementation training,9 although implementation science is not widely taught in structured programs. There are, however, a wide range of implementation training institutes, conferences, fellowships, and online trainings for early career researchers or those from other fields (https://societyforimplementationresearchcollaboration.org/dissemination-and-implementation-training-opportunities/). Conversely, a separate track on “Promoting Health Equity and Eliminating Disparities” was included at the 2016 Dissemination and Implementation conference, which is open to all researchers interested in implementation science. Finally, we acknowledge that implementation science could also draw many lessons from disparity research. Although better implementation of a clinical intervention would theoretically improve care for vulnerable and nonvulnerable groups alike, that is an empirical question that could and should be tested more often in implementation studies. More cross-collaboration between implementation scientists and health disparities researchers would help to achieve the goal of incorporating implementation science into health disparities studies and vice versa.


CONCLUSIONS

Although equity may not have been an explicitly stated goal of implementation science at the start, it is becoming more widely recognized as a critical issue. Disparities can be considered a “special case” of implementation challenges—when evidence-based clinical interventions are delivered to, and received by, vulnerable populations at lower rates than majority populations. Implementation science offers broad conceptual frameworks that highlight multilevel factors affecting implementation and hybrid study designs that allow for the simultaneous examination of a clinical intervention and an implementation strategy. These designs offer rigorous methods that can test whether implementation strategies that attempt to reduce disparities should be spread—and can do so more quickly than the traditional efficacy-effectiveness-dissemination research pipeline. Bringing together health disparities research and implementation science could advance equity more effectively than either could achieve on its own.


REFERENCES

1. National Partnership for Action. National Stakeholder Strategy for Achieving Health Equity. Rockville, MD: US Department of Health & Human Services, Office of Minority Health; 2011.
2. Kilbourne AM, Switzer G, Hyman K, et al. Advancing health disparities research within the health care system: a conceptual framework. Am J Public Health. 2006;96:2113–2121.
3. Smedley BD, Stith AY, Nelson AR. Unequal Treatment: Confronting Racial and Ethnic Disparities in Health Care. Washington, DC: The National Academies Press; 2003.
4. Hutchinson RN, Shin S. Systematic review of health disparities for cardiovascular diseases and associated factors among American Indian and Alaska Native populations. PLoS One. 2014;9:e80973.
5. Saha S, Freeman M, Toure J, et al. Racial and ethnic disparities in the VA health care system: a systematic review. J Gen Intern Med. 2008;23:654–671.
6. Peek ME, Cargill A, Huang ES. Diabetes health disparities: a systematic review of health care interventions. Med Care Res Rev. 2007;64:101S–156S.
7. Chin MH, Walters AE, Cook SC, et al. Interventions to reduce racial and ethnic disparities in health care. Med Care Res Rev. 2007;64:7S–28S.
8. Chin MH, Clarke AR, Nocon RS, et al. A roadmap and best practices for organizations to reduce racial and ethnic disparities in health care. J Gen Intern Med. 2012;27:992–1000.
9. Alcaraz KI, Sly J, Ashing K, et al. The ConNECT framework: a model for advancing behavioral medicine science and practice to foster health equity. J Behav Med. 2017;40:23–38.
10. Chin MH, Goddu AP, Ferguson MJ, et al. Expanding and sustaining integrated health care-community efforts to reduce diabetes disparities. Health Promot Pract. 2014;15:29S–39S.
11. Paskett E, Thompson B, Ammerman AS, et al. Multilevel interventions to address health disparities show promise in improving population health. Health Aff (Millwood). 2016;35:1429–1434.
12. Purnell TS, Calhoun EA, Golden SH, et al. Achieving health equity: closing the gaps in health care disparities, interventions, and research. Health Aff (Millwood). 2016;35:1410–1415.
13. Glick SB, Clarke AR, Blanchard A, et al. Cervical cancer screening, diagnosis and treatment interventions for racial and ethnic minorities: a systematic review. J Gen Intern Med. 2012;27:1016–1032.
14. Hemmige V, McFadden R, Cook S, et al. HIV prevention interventions to reduce racial disparities in the United States: a systematic review. J Gen Intern Med. 2012;27:1047–1067.
15. Naylor K, Ward J, Polite BN. Interventions to improve care related to colorectal cancer among racial and ethnic minorities: a systematic review. J Gen Intern Med. 2012;27:1033–1046.
16. Press VG, Pappalardo AA, Conwell WD, et al. Interventions to improve outcomes for minority adults with asthma: a systematic review. J Gen Intern Med. 2012;27:1001–1015.
17. Sajid S, Kotwal AA, Dale W. Interventions to improve decision making and reduce racial and ethnic disparities in the management of prostate cancer: a systematic review. J Gen Intern Med. 2012;27:1068–1078.
18. Gorin SS, Badr H, Krebs P, et al. Multilevel interventions and racial/ethnic health disparities. J Natl Cancer Inst Monogr. 2012;2012:100–111.
19. Ortega AN, Albert SL, Sharif MZ, et al. Proyecto MercadoFRESCO: a multi-level, community-engaged corner store intervention in East Los Angeles and Boyle Heights. J Community Health. 2015;40:347–356.
20. Peek ME, Ferguson M, Bergeron N, et al. Integrated community-healthcare diabetes interventions to reduce disparities. Curr Diab Rep. 2014;14:467.
21. Eccles MP, Mittman BS. Welcome to implementation science. Implement Sci. 2006;1:1.
22. Bauer MS, Damschroder L, Hagedorn H, et al. An introduction to implementation science for the non-specialist. BMC Psychol. 2015;3:32.
23. Curran GM, Bauer M, Mittman B, et al. Effectiveness-implementation hybrid designs: combining elements of clinical effectiveness and implementation research to enhance public health impact. Med Care. 2012;50:217–226.
24. Proctor EK, Powell BJ, McMillen JC. Implementation strategies: recommendations for specifying and reporting. Implement Sci. 2013;8:139.
25. King M, Semlyen J, Tai SS, et al. A systematic review of mental disorder, suicide, and deliberate self harm in lesbian, gay and bisexual people. BMC Psychiatry. 2008;8:70.
26. Peterson K, McCleery E, Waldrip K. Evidence Brief: Update on Prevalence of and Interventions to Reduce Racial and Ethnic Disparities Within the VA. Department of Veterans Affairs; 2014.
27. Sabin JA, Riskind RG, Nosek BA. Health care providers’ implicit and explicit attitudes toward lesbian women and gay men. Am J Public Health. 2015;105:1831–1841.
28. Centers for Disease Control. Factors that contribute to health disparities in cancer. 2014. Available at: www.cdc.gov/cancer/healthdisparities/basic_info/challenges.htm. Accessed November 15, 2015.
29. Agency for Healthcare Research and Quality. National Healthcare Disparities Report (2003). Rockville, MD: Department of Health and Human Services; 2003.
30. Diez Roux AV. Conceptual approaches to the study of health disparities. Annu Rev Public Health. 2012;33:41–58.
31. Janz NK, Becker MH. The Health Belief Model: a decade later. Health Educ Q. 1984;11:1–47.
32. Vojta D, Koehler TB, Longjohn M, et al. A coordinated national model for diabetes prevention: linking health systems to an evidence-based community program. Am J Prev Med. 2013;44:S301–S306.
33. Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10:53.
34. Damschroder LJ, Aron DC, Keith RE, et al. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.
35. Rogers EM. A prospective and retrospective look at the diffusion model. J Health Commun. 2004;9(suppl 1):13–19.
36. Powell BJ, Waltz TJ, Chinman MJ, et al. A refined compilation of implementation strategies: results from the Expert Recommendations for Implementing Change (ERIC) project. Implement Sci. 2015;10:21.
37. Powell BJ, Beidas RS, Lewis CC, et al. Methods to improve the selection and tailoring of implementation strategies. J Behav Health Serv Res. 2017;44:177–194.
38. Fixsen DL, Naoom SF, Blasé KA, et al. Implementation Research: A Synthesis of the Literature. Tampa, FL: University of South Florida, Louis de la Parte Florida Mental Health Institute; 2005.
39. Stetler CB, Legro MW, Wallace CM, et al. The role of formative evaluation in implementation research and the QUERI experience. J Gen Intern Med. 2006;21(suppl 2):S1–S8.
40. Humphreys K, Maisel NC, Blodgett JC, et al. Representativeness of patients enrolled in influential clinical trials: a comparison of substance dependence with other medical disorders. J Stud Alcohol Drugs. 2013;74:889–893.
41. Humphreys K, Weingardt KR, Harris AH. Influence of subject eligibility criteria on compliance with National Institutes of Health guidelines for inclusion of women, minorities, and children in treatment research. Alcohol Clin Exp Res. 2007;31:988–995.
42. Zulman DM, Sussman JB, Chen X, et al. Examining the evidence: a systematic review of the inclusion and analysis of older adults in randomized controlled trials. J Gen Intern Med. 2011;26:783–790.
43. Williams NJ, Jean-Louis G, Ravenell J, et al. A community-oriented framework to increase screening and treatment of obstructive sleep apnea among blacks. Sleep Med. 2016;18:82–87.
44. Carnethon MR, De Chavez PJ, Zee PC, et al. Disparities in sleep characteristics by race/ethnicity in a population-based sample: Chicago Area Sleep Study. Sleep Med. 2016;18:50–55.
45. Canino G, McQuaid EL, Rand CS. Addressing asthma health disparities: a multilevel challenge. J Allergy Clin Immunol. 2009;123:1209–1217; quiz 1218–1219.
46. Substance Abuse and Mental Health Services Administration. Behavioral Health Services for People Who Are Homeless. Treatment Improvement Protocol (TIP) Series 55. Rockville, MD: Substance Abuse and Mental Health Services Administration; 2013.
47. Henry M, Shivji A, de Sousa T, et al. The 2015 Annual Homeless Assessment Report (AHAR) to Congress. Washington, DC: The US Department of Housing and Urban Development; 2015.
48. Drake RE, Mueser KT, Brunette MF, et al. A review of treatments for people with severe mental illnesses and co-occurring substance use disorders. Psychiatr Rehabil J. 2004;27:360–374.
49. Nelson G, Stefancic A, Rae J, et al. Early implementation evaluation of a multi-site housing first intervention for homeless people with mental illness: a mixed methods approach. Eval Program Plann. 2014;43:16–26.
50. Chinman M, McCarthy S, Hannah G, et al. Using Getting To Outcomes to facilitate the use of an evidence-based practice in VA homeless programs: a cluster-randomized trial of an implementation support strategy. Implement Sci. 2017;12:34.
51. Kane RL, Saleh KJ, Wilt TJ, et al. Total Knee Replacement. Evidence Report/Technology Assessment No. 86 (Prepared by the Minnesota Evidence-based Practice Center, Minneapolis, MN). AHRQ Publication No. 04-E006-2. Rockville, MD: Agency for Healthcare Research and Quality; 2003.
52. Centers for Disease Control and Prevention. Racial disparities in total knee replacement among Medicare enrollees—United States, 2000-2006. MMWR Morb Mortal Wkly Rep. 2009;58:133–138.
53. Ibrahim SA, Hanusa BH, Hannon MJ, et al. Willingness and access to joint replacement among African American patients with knee osteoarthritis: a randomized, controlled intervention. Arthritis Rheum. 2013;65:1253–1261.
54. Brownson RC, Jacobs JA, Tabak RG, et al. Designing for dissemination among public health researchers: findings from a national survey in the United States. Am J Public Health. 2013;103:1693–1699.
55. Chambers DA, Glasgow RE, Stange KC. The dynamic sustainability framework: addressing the paradox of sustainment amid ongoing change. Implement Sci. 2013;8:117.
56. Chambers DA, Norton WE. The adaptome: advancing the science of intervention adaptation. Am J Prev Med. 2016;51:S124–S131.
57. Purtle J, Peters R, Brownson RC. A review of policy dissemination and implementation research funded by the National Institutes of Health, 2007–2014. Implement Sci. 2016;11:1.
Keywords:

implementation science; health care disparities; health disparities

Copyright © 2017 Wolters Kluwer Health, Inc. All rights reserved.