Assessing Organizational Supports for Evidence-Based Decision Making in Local Public Health Departments in the United States

Development and Psychometric Properties of a New Measure

Mazzucca, Stephanie PhD; Parks, Renee G. MS; Tabak, Rachel G. PhD, RD; Allen, Peg PhD, MPH; Dobbins, Maureen PhD, RN; Stamatakis, Katherine A. PhD, MPH; Brownson, Ross C. PhD

Journal of Public Health Management and Practice: September/October 2019 - Volume 25 - Issue 5 - p 454–463
doi: 10.1097/PHH.0000000000000952
Research Reports: Research Full Report

Context: Fostering evidence-based decision making (EBDM) within local public health departments and among local health department (LHD) practitioners is crucial for the successful translation of research into public health practice to prevent and control chronic disease.

Objective: The purpose of this study was to identify organizational supports for EBDM within LHDs and determine psychometric properties of a measure of organizational supports for EBDM in LHDs.

Design: Cross-sectional, observational study.

Setting: Local public health departments in the United States.

Participants: Local health department practitioners (N = 376) across the United States participated in the study.

Main Outcome Measures: Local health department practitioners completed a survey containing 27 items about organizational supports for EBDM. Most items were adapted from previously developed surveys, and input from researchers and practitioners guided survey development. Confirmatory factor analysis was used to test and refine the psychometric properties of the measure.

Results: The final solution included 6 factors of 22 items: awareness of EBDM (3 items), capacity for EBDM (7 items), resources availability (3 items), evaluation capacity (3 items), EBDM climate cultivation (3 items), and partnerships to support EBDM (3 items). This factor solution achieved acceptable fit (eg, Comparative Fit Index = 0.965). Logistic regression models showed positive relationships between the 6 factors and the number of evidence-based interventions delivered.

Conclusions: This study identified important organizational supports for EBDM within LHDs. Results of this study can be used to understand and enhance organizational processes and structures to support EBDM to improve LHD performance and population health. Strong measures are important for understanding how LHDs support EBDM, evaluating interventions to improve LHD capacity, and guiding programmatic and policy efforts within LHDs.

Prevention Research Center in St Louis, Brown School, Washington University in St Louis, St Louis, Missouri (Drs Mazzucca, Tabak, Allen, and Brownson and Ms Parks); National Collaborating Centre for Methods and Tools and Health Evidence, McMaster University, Ontario, Canada (Dr Dobbins); Department of Epidemiology, College for Public Health & Social Justice, Saint Louis University, St Louis, Missouri (Dr Stamatakis); and Department of Surgery (Division of Public Health Sciences) and Alvin J. Siteman Cancer Center, Washington University School of Medicine, Washington University in St Louis, St Louis, Missouri (Dr Brownson).

Correspondence: Stephanie Mazzucca, PhD, Prevention Research Center in St Louis, Brown School, Washington University in St Louis, One Brookings Dr, Campus Box 1196, St Louis, MO 63130 (smazzucca@wustl.edu).

The authors appreciate the LHD practitioners for their participation in the national survey. They thank Dr Derek Hales for critical review of the analyses and statistical support. The authors thank their collaborators Diane Weber, executive director for Missouri Association for Local Public Health Agencies (MoALPHA), and the National Association of County & City Health Officials (NACCHO). With gratitude, they acknowledge the administrative support of Linda Dix, Mary Adams, and Cheryl Valko at the Prevention Research Center in St Louis, Brown School, Washington University in St Louis.

The authors' contributions are as follows. Conceptualization and design: S.M., R.C.B., and K.A.S.; survey instrument development: R.C.B., R.G.T., P.A., K.A.S., and M.D.; statistical analysis: S.M.; review of analyses: S.M., R.C.B., R.G.P., R.G.T., P.A., K.A.S., and M.D.; writing: S.M., R.G.P., and R.C.B.; and manuscript content revisions: R.C.B., R.G.T., P.A., M.D., and K.A.S. All authors read and approved the final manuscript.

This study is funded by the National Institute of Diabetes and Digestive and Kidney Diseases of the National Institutes of Health under award numbers 5R01DK109913, 1P30DK092950, and P30DK020579. The findings and conclusions in this article are those of the authors and do not necessarily represent the official positions of the National Institutes of Health.

The authors declare that they have no competing interests.

Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal's Web site (http://www.JPHMP.com).

This is an open-access article distributed under the terms of the Creative Commons Attribution-Non Commercial-No Derivatives License 4.0 (CC BY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal.

Local health departments (LHDs) in the United States are critical to public health efforts focused on reducing the significant burden of chronic diseases and are responsible for implementing interventions to benefit the population's health. The 2800 LHDs across the country are well suited to support chronic disease reduction and prevention because they have a deep understanding of the local needs, context, and available resources within their communities.1,2 Local health department staff frequently deliver interventions directly to the community, but they also deliver interventions in collaboration with community partners, drawing on both LHD staff and partner organization staff and volunteers. These partners work across the health care (eg, hospitals or health care providers), nonprofit (eg, churches), government (eg, parks and recreation departments), and private (eg, worksites) sectors.3–5 Organizations such as the Institute of Medicine (now the National Academy of Medicine) have called for practitioners' efforts to be focused on implementing evidence-based interventions (EBIs) in their communities,6 which are defined broadly as programs, practices, processes, policies, and guidelines proven to be efficacious or effective.7 Resources have been developed to support the choice and implementation of EBIs (eg, the Community Guide8). However, there is a gap between the dissemination of EBIs and their implementation into public health practice,9 and efforts are needed to improve the uptake of EBIs.

Evidence-based public health is an approach for improving population health that integrates research-tested interventions with community preferences10,11 that can be used by LHD practitioners to shrink the gap between research and practice.9,11,12 In a public health context, evidence refers to some type of data (eg, quantitative epidemiologic data, results from program or policy evaluations, and qualitative data) that are used to identify a problem, what should be done about the problem, how to implement the solution, and how to evaluate progress.10,13 A key piece of the evidence-based public health framework is evidence-based decision making (EBDM), defined as the process by which organizations choose and implement an EBI.14 Evidence-based decision making is characterized by several components: reviewing the best available peer-reviewed evidence, using data and information systems, applying program planning frameworks, engaging the community in assessment and decision making, conducting sound evaluation, disseminating findings to key stakeholders and decision makers, and synthesizing scientific and communication skills with common sense and political acumen as decisions are made.14 Many of these EBDM components are featured in national accreditation standards, illustrating their importance in the functioning and performance of a public health department.15 In addition, EBDM is closely aligned with the field of dissemination and implementation science, which aims to understand what processes and factors are associated with widespread use of an EBI and how EBIs are successfully integrated into usual practice in different settings (eg, community health clinics, LHDs).7

For EBDM to occur successfully, individual practitioners must have the required skills and abilities, for example, knowledge of EBIs or evaluation principles,16,17 which can be improved through training and capacity building at the individual practitioner level.18 Also, the organizations in which the practitioners work must be supportive of EBDM. Prior research has shown that organizations supportive of EBDM, for example, by dedicating financial resources for EBDM, demonstrate higher rates of EBI implementation and have higher agency performance (ie, the ability to carry out the 10 essential public health services, measured with setting-specific assessment instruments).19,20 Modifying organizational processes and capacity-building training efforts has the potential to promote uptake of EBDM and delivery of EBIs and improve agency performance.21–25 High-quality measures, such as those that are developed according to a theoretical model and empirically tested, are essential to understanding factors related to EBDM and efforts to implement EBDM in public health settings.26,27

Previously developed measures have focused on individual skills related to EBDM28 and organizational supports for EBDM within state health departments (SHDs).29 No measure of organizational supports has been rigorously validated for use in LHDs; the 1 existing measure for LHDs has only limited reliability evidence.30 The nature of EBDM is likely to differ at the local versus the state level; thus, organizational supports for EBDM may operate differently in LHDs.31–33 For example, the way that partnerships influence EBDM may differ at the local versus the state level, since LHD practitioners may work with partners in a more ongoing, collaborative manner than SHD practitioners, who direct funding to partners for evidence-based public health efforts. In addition, differences in the educational background of LHD practitioners, fewer of whom are trained in public health compared with those in SHDs,32,33 may necessitate different organizational supports for EBDM. As such, the purpose of this study was to identify important organizational supports for EBDM within LHDs based on a theoretically driven framework and to evaluate the psychometric properties of a measure of these organizational supports, including relationships between organizational supports and delivery of EBIs, that can be used by LHDs across the United States. Public health practitioners and researchers could use this measure to guide the development and evaluation of efforts to increase individual and organizational capacity for EBDM within LHDs.

Methods

This cross-sectional study used data from an online survey completed by LHD practitioners in the United States. The survey was part of a larger study to improve evidence-based diabetes management and chronic disease prevention and control within LHDs.34 The study was reviewed and approved by the Institutional Review Board (IRB no. 201705026) of Washington University in St Louis.

Participant recruitment

Eligible LHDs were those that reported implementing either diabetes or body mass index screening or population-based nutrition or physical activity efforts in the 2016 National Association of County & City Health Officials (NACCHO) National Profile. Of those 1677 LHDs, 200 were randomly sampled from each of 3 jurisdiction population size categories (small: <50 000; medium: 50 000-199 999; large: ≥200 000). This stratified sampling frame ensured adequate representation of medium and large LHDs, which make up about 27% and 16%, respectively, of the LHDs in the 2016 NACCHO National Profile. The lead practitioner working in chronic disease control at each LHD was invited to participate. After excluding invalid e-mail addresses, the final recruitment sample comprised 579 practitioners.
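
The stratified draw described above is simple to reproduce. The Python sketch below is an illustration only (the study's actual sampling code is not published); it draws 200 LHDs at random from each of the 3 jurisdiction-size strata, with population cutoffs taken from the text.

```python
import random

def stratified_sample(lhds, n_per_stratum=200, seed=2016):
    """Sample n_per_stratum LHDs at random from each jurisdiction-size stratum.

    Strata follow the cutoffs in the text: small (<50 000),
    medium (50 000-199 999), and large (>=200 000).
    """
    strata = {"small": [], "medium": [], "large": []}
    for lhd in lhds:
        pop = lhd["population"]
        if pop < 50_000:
            strata["small"].append(lhd)
        elif pop < 200_000:
            strata["medium"].append(lhd)
        else:
            strata["large"].append(lhd)
    rng = random.Random(seed)  # fixed seed makes the draw reproducible
    sample = []
    for members in strata.values():
        # draw up to n_per_stratum without replacement from each stratum
        sample.extend(rng.sample(members, min(n_per_stratum, len(members))))
    return sample
```

Sampling equal numbers per stratum, rather than proportionally, is what oversamples medium and large LHDs relative to their share of the NACCHO profile.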

Data collection

Data were collected with Qualtrics online survey software. Preinvitation e-mails were sent to the participants to inform them of the study purpose, and invitation e-mails with the study information and survey link were sent 1 week later. Those who had not completed the survey received up to 3 reminder e-mails and 2 phone calls over a 6-week period to encourage participation. The 376 (65% of invited sample) respondents were offered a $20 Amazon.com gift card for completing the survey.

Measures

Survey development was guided by a theoretical understanding of public health departments and built on prior studies. These studies reviewed administrative evidence-based practices in SHDs and LHDs,19 assessed barriers to EBDM25 and stages of organizational readiness for implementing EBIs in community chronic disease prevention settings,35 and developed measures of administrative evidence-based practices in LHDs30 and organizational supports for EBDM in SHDs.29 Other items were taken from instruments identified by the project team through snowball sampling.36,37 The survey development process has been detailed elsewhere,34 and full text of the survey items and response options used in this analysis is available in Supplemental Digital Content Appendix 1, available at http://links.lww.com/JPHMP/A559. Survey items were taken from prior surveys developed and used by the project team.19,25,29,30,35 Broadly, questions on the survey assessed use of EBIs, skills related to EBDM, and organizational supports for EBDM within LHDs. In addition to 3 rounds of input, cognitive response testing interviews with 10 practitioners similar to those in the target audience and an assessment of test-retest reliability were conducted.

Items assessing organizational support factors related to EBDM were grouped into 6 categories on the survey, as shown in Supplemental Digital Content Appendix 1, available at http://links.lww.com/JPHMP/A559: awareness of EBDM (4 items), use of EBDM (7 items), resources available for maintaining EBDM (3 items), EBDM climate cultivation (4 items), evaluation capacity (5 items), and partnerships to support EBDM (4 items). The respondents were asked to indicate how much they agreed with each item on a 7-point Likert scale (1 = strongly disagree to 7 = strongly agree). The respondents reported characteristics about their LHD (eg, jurisdiction population size, current status in Public Health Accreditation Board [PHAB] accreditation efforts) and themselves (eg, age group, years in current position, Table 1). To quantify the number of EBIs offered by the LHD, the respondents were shown a list of 4 EBIs in 1 of 5 categories depending on the program area in which they worked (ie, diabetes, nutrition, physical activity, obesity, tobacco). Evidence-based interventions were taken from those identified in The Community Guide8 and What Works for Health38 (eg, Diabetes Prevention Program; worksite programs, policies, or environmental changes to support nutrition/healthy food and physical activity; and reminders for clinic health care providers to discuss tobacco/nicotine cessation with clients). During cognitive response testing, listed EBIs were reviewed by LHD practitioners to confirm that they were the most relevant set of EBIs for each program area. The respondents who reported working in a single program area were given interventions for that program area. Those who reported working in multiple program areas received the diabetes interventions if diabetes was selected as one of their program areas. If a respondent worked in more than one of these areas outside of diabetes, they received a randomly assigned set of interventions for one of their program areas.
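
The rules for deciding which EBI list a respondent saw reduce to three cases. The sketch below is a hypothetical helper, not the study's actual survey logic, encoding those cases as described above.

```python
import random

def assign_ebi_category(program_areas, seed=None):
    """Pick the EBI category shown to a respondent.

    Rules from the text: a single program area gets that area's EBIs;
    multiple areas that include diabetes get the diabetes EBIs;
    otherwise one of the respondent's areas is chosen at random.
    """
    areas = list(program_areas)
    if len(areas) == 1:
        return areas[0]
    if "diabetes" in areas:
        return "diabetes"
    return random.Random(seed).choice(areas)
```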

TABLE 1

Statistical analysis

Evidence-based decision making item means, standard deviations, and interitem correlations were calculated. A confirmatory factor analysis (CFA) was conducted to confirm the validity of the 6 factors and identify the most parsimonious (ie, simplest) and theoretically sound model. The analytic process was guided by Schumacker and Lomax39 and was performed in MPlus version 8.40 The base model was specified with 6 factors and all 27 items included in the survey, estimated with a robust weighted least squares estimator. Items considered for removal were those that cross-loaded onto other factors on the basis of modification indices, for example, when the highest modification indices involved an item and a factor on which the item was not originally placed. In addition, items that were highly correlated with another item (>0.7) were considered for removal; in this case, the item with the stronger factor loading was retained. Covariance terms were added on the basis of the modification indices given by MPlus. Several fit indices were used to evaluate model fit: χ2/df, the comparative fit index (CFI), the Tucker-Lewis index, and the root-mean-square error of approximation (RMSEA) with its 90% confidence interval. CFI values of 0.90 or greater and 0.95 or greater indicate adequate and good fit, respectively, and RMSEA values less than 0.08 or 0.06 indicate adequate and good model fit, respectively.41 Correlations between factors were also examined; factors with correlation coefficients of 0.85 or greater were deemed strongly related.42
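
MPlus reports these fit indices directly, but their definitions are straightforward. As an illustration only (not the study's code), the sketch below computes RMSEA and CFI from the model and null (independence) model chi-square statistics using their standard noncentrality-based formulas.

```python
import math

def rmsea(chi2, df, n):
    """RMSEA = sqrt(max(chi2 - df, 0) / (df * (n - 1))); smaller is better."""
    return math.sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

def cfi(chi2_model, df_model, chi2_null, df_null):
    """CFI = 1 - (model noncentrality / null-model noncentrality); closer to 1 is better."""
    d_model = max(chi2_model - df_model, 0.0)
    d_null = max(chi2_null - df_null, d_model)
    return 1.0 - d_model / d_null if d_null > 0 else 1.0
```

With the article's cutoffs, RMSEA below 0.08 and CFI at 0.90 or above would count as adequate fit; the degrees-of-freedom and null-model values needed to reproduce the reported indices are not given in the text.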

Once a final factor structure was identified, standardized factor scores were obtained from MPlus. To examine construct validity of the factor structure, logistic regression models were fit in SAS version 9.4 (SAS Institute Inc, Cary, North Carolina) to quantify the associations between continuous EBDM factor scores (independent variables) and delivery of EBIs. The dependent variable, the number of EBIs delivered of the 4 presented to a respondent, was categorized into 2 levels: 0 to 2 (referent) versus 3 to 4. Odds ratios and 95% confidence intervals were calculated. Several characteristics were identified as potential confounders: jurisdiction population size, PHAB accreditation status, presence of an academic health department partnership, and the respondent's experience in public health. With the exception of PHAB accreditation, none of these covariates changed the point estimates of the association between the factor scores and EBI delivery or was associated with EBI delivery. Thus, the models presented are adjusted for PHAB accreditation status.
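
As a reminder of how the reported odds ratios arise from the fitted models, the generic sketch below (not tied to the study's SAS output) exponentiates a logistic regression coefficient and its Wald interval to obtain an odds ratio with an approximate 95% confidence interval.

```python
import math

def odds_ratio_with_ci(beta, se, z=1.96):
    """Return (OR, CI lower, CI upper) for a logistic regression coefficient.

    OR = exp(beta); the Wald 95% CI is exp(beta +/- 1.96 * se).
    """
    return (math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se))
```

For example, a coefficient of 0.27 with a hypothetical standard error of 0.10 corresponds to an odds ratio of about 1.31, matching the low end of the range reported in Table 5.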

Results

The majority of the 376 LHD practitioners were between 40 and 59 years of age (58%), were female (83%), and had been in public health for 10 or more years (72%, Table 1). While most practitioners held a master's degree or higher (58%), only one-third (32%) of all participants held a public health master's or doctoral degree. Most LHDs reported participation in an academic health department partnership (73%), and nearly one-third (30%) were accredited by the PHAB. Comparing the LHDs of survey respondents (n = 376) with those of nonrespondents (n = 206), jurisdiction population sizes were similar, as were the proportions in rural jurisdictions. A higher proportion of respondents were from LHDs that were PHAB-accredited, were locally governed, had a local board of health, and used the Community Guide in some areas or consistently across program areas (data not shown).

A series of structural equation models were fit to conduct the CFA of 6 factors according to the categories of items on the survey (Table 2). The base model had poor fit according to all indices (χ2 = 1355, RMSEA = 0.096, CFI = 0.921, Table 3). Based on suggested modifications provided by MPlus (ie, items with high modification indices), 5 subsequent modification models were fit. In these models, individual items were removed because of cross-loading onto multiple factors or covariance terms were added between individual items that were related. Details of individual modifications are provided in Supplemental Digital Content Appendix 2, available at http://links.lww.com/JPHMP/A560. The final measure had good fit (χ2 = 569, RMSEA = 0.073, CFI = 0.965) and comprised 6 scales: awareness of EBDM (3 items), capacity for EBDM (7 items), resources availability (3 items), evaluation capacity (3 items), EBDM climate cultivation (3 items), and partnerships to support EBDM (3 items), with a total of 22 items.

TABLE 2

TABLE 3

Factor loadings and cross-factor correlations for the final 6-factor model solution are presented in Table 4. Most items (20 of 22) had high factor loadings of greater than 0.7, with the factor loadings for the remaining 2 items greater than 0.6. This indicates that items fit well on their respective scales; low factor loadings would suggest that an item is out of place on a given factor. Two factors, awareness of EBDM (factor 1) and capacity for EBDM (factor 2), had a markedly higher correlation (r = 0.91) than the other pairs of factors. The lowest cross-factor correlation (r = 0.40) was noted between resource availability (factor 3) and partnerships to support EBDM (factor 6). All other correlations ranged from 0.46 to 0.77.

TABLE 4

Logistic regression models showed positive relationships between the 6 factors and the number of EBIs delivered (Table 5). Overall, these relationships were similar in strength across the 6 factors (odds ratios ranged from 1.31 to 1.52). The strongest relationship was found for resource availability (factor 3) and number of EBIs delivered, while the weakest relationship occurred between partnerships to support EBDM (factor 6) and number of EBIs delivered, which did not reach statistical significance.

TABLE 5

Discussion

The purpose of this study was to develop a measure of organizational support for EBDM and to assess the psychometric properties of the measure. Results from the CFA show that the 6-factor model had good fit and that there is strong evidence of construct validity based on the relationships between the factors and delivery of EBIs. This measure can be used by public health practitioners and researchers while planning for, implementing, and evaluating efforts to increase individual and organizational capacity for EBDM within LHDs. For example, if an LHD completed the survey and scored lower on evaluation capacity than other factors, they could seek opportunities for quality improvement focused on aspects of evaluation capacity (eg, planning for evaluation before implementing an EBI). Using the instrument to evaluate changes in organizational capacity would show whether or not their efforts were successful in improving the LHD's evaluation capacity.

This study extends previous work by Stamatakis and colleagues29 to understand factors related to EBDM among SHD practitioners. There are notable differences between SHD and LHD structures and the practitioners within each setting that may need to be accounted for differently in measures. For example, LHD practitioners have more heterogeneous backgrounds and are less likely to have formal public health training.32,33 Also, public health governance structures and the relationship between state and regional or local health departments differ widely across states, which could influence how much autonomy LHDs have to modify their organization's EBDM supports or the level of support LHDs receive from the state to engage in EBDM.31 These differences could affect the way EBDM operates within an LHD, what organizational supports are needed, how LHD practitioners support and ensure fidelity of interventions implemented by community lay workers or contract with agencies to do so, and ultimately how EBIs are implemented. The need for an LHD-specific measure is also highlighted by differences in the structure of the state versus local assessment; for example, the same leadership item grouped with leadership support and commitment items in the SHD survey but with items related to capacity for EBDM in the LHD sample. This study also builds on work by Reis and colleagues30 to develop a measure for LHDs, which was tested using a smaller sample (n = 90) to establish initial internal consistency (ie, Cronbach α) and test-retest reliability evidence. Building on these 2 studies, this study was designed to understand the supports for EBDM in the specific context of LHDs, using a larger sample and rigorous evaluation methods (eg, CFA) to develop a survey that incorporates our most up-to-date understanding of EBDM and to establish construct validity of the survey (ie, relationships between factors and EBI delivery).

The organizational supports for EBDM identified in this study are in line with other factors identified in prior literature. Items in our factors related to evaluation capacity, access to evidence, resource availability, and organizational culture align with important characteristics of organizations identified by studies led by Allen et al and Kramer et al.43–45 In addition, Hu and colleagues46 reported increases in the likelihood of using research evidence with more favorable profiles of organizational supports in a longitudinal study of SHDs, with a particular emphasis on the impact of leadership support. Peirson and colleagues47 found that characteristics of leaders and access to and resources for using evidence are important in building capacity for evidence-informed decision making in Canadian public health units. Evidence-informed decision making is a term used in Canada and Australia to describe a process similar to EBDM while highlighting that public health decisions are based on evidence and real-world context (eg, organizational and political factors).7 Dobbins and colleagues48 demonstrated that an organizational culture supportive of evidence-informed decision making modifies the response of public health agencies to knowledge translation and exchange interventions. Capacity-building efforts should consider these differences and possibly tailor strategies on the basis of an agency's ability to support EBDM.

Several limitations should be considered in light of the findings of this study. Survey items were part of a self-report survey of LHD practitioners, which may not fully reflect the organizational attributes of an LHD. Response bias may influence the generalizability of our findings, as a higher proportion of PHAB-accredited LHDs were present in our sample compared with nonrespondents. In addition, our sampling methods may limit how generalizable the sample is to all LHDs in the 2016 NACCHO profile from which our sample was drawn. A lower proportion of LHDs in our sample were from a rural jurisdiction and had a state-governed structure, and a higher proportion of respondents were PHAB accredited and were locally governed compared with other LHDs in the NACCHO profile (data not shown). The difference in representation from rural LHDs likely resulted from our sampling strategy that sampled equal numbers of small, medium, and large LHDs, thereby oversampling larger LHDs. Evidence-based decision making may operate differently in the LHDs in our sample compared with nonrespondents and with other LHDs around the United States. In addition, the high correlation between the awareness and capacity factors (r = 0.91) indicates that these may be representing the same latent factor. While our results suggest a relationship between the organizational supports for EBDM and delivery of EBIs, future studies should assess the construct validity of these factors by investigating relationships with other types of EBIs (eg, colorectal cancer screening) or whether changes in organizational supports can improve LHD performance and EBI delivery.

Despite these limitations, our study is strengthened by the theoretical development and empirical testing of the instrument that allowed us to build upon prior research and knowledge of important organizational supports for EBDM. In addition, LHD practitioners in our sample represent a variety of LHDs across the country (ie, sampled from across the United States and from different jurisdiction sizes). The factors identified are potentially modifiable and could be incorporated into public health and research efforts to improve EBDM within LHDs. Currently, there are few strategies for modifying organizational supports for EBDM with demonstrated effectiveness. Brownson et al24 used EBDM training and a supplemental technical assistance approach to improve EBDM within SHDs and found improvements on only 1 of 5 organizational factors (ie, access to evidence and skilled staff). Changing organizational-level factors is made more challenging due to staff turnover, competing priorities, and a lack of incentive to institute changes.49 While it will require a significant, long-term commitment from LHD leaders,50 building organizational capacity for EBDM is crucial for health departments to fulfill their role in population-level chronic disease control. Future work should investigate what is needed to make meaningful changes to an organization's ability to support EBDM.

Implications for Policy & Practice

  • This study adds to the growing body of literature on measuring and promoting EBDM within public health settings so that evidence-based programs and policies can be most efficiently and effectively translated into practice.
  • Measures with sound psychometric properties are critical to understanding how public health departments support EBDM, evaluating interventions aimed at improving the capacity of LHDs to support EBDM, and guiding the development of evidence-based policies to support EBDM within LHDs.
  • These efforts can enhance the ability to translate research into public health practice effectively, the overall performance of LHDs, and eventually the health of the populations they serve.

References

1. National Profile of Local Health Departments. Chapter 7-Programs and Services (article online). http://nacchoprofilestudy.org/chapter-7/. Published 2017. Accessed April 11, 2018.
2. National Association of County & City Health Officials. Local Health Departments Protect the Public's Health. Washington, DC: National Association of County & City Health Officials; 2014.
3. Mays G. Organization of the public health delivery system. In: Novick L, Morrow C, Mays G, eds. Public Health Administration. Principles for Population-Based Management. 2nd ed. Sudbury, MA: Jones and Bartlett; 2008:69–126.
4. Mays GP, Mamaril CB, Timsina LR. Preventable death rates fell where communities expanded population health activities through multisector networks. Health Aff (Millwood). 2016;35(11):2005–2013.
5. Mays GP, Scutchfield FD. Improving public health system performance through multiorganizational partnerships. Prev Chronic Dis. 2010;7(6):A116.
6. Institute of Medicine. The Future of the Public's Health in the 21st Century. Washington, DC: The National Academies Press; 2003.
7. Rabin BA, Brownson RC. Terminology for dissemination and implementation research. In: Brownson RC, Colditz GA, Proctor EK, eds. Dissemination and Implementation Research in Health: Translating Science to Practice. New York, NY: Oxford University Press; 2017:19–45.
8. US Preventive Services Task Force. The Guide to Community Preventive Services (The Community Guide). Atlanta, GA: Centers for Disease Control and Prevention (CDC). https://www.thecommunityguide.org/. Published 2017. Accessed December 22, 2017.
9. Brownson RC, Fielding JE, Maylahn CM. Evidence-based public health: a fundamental concept for public health practice. Annu Rev Public Health. 2009;30:175–201.
10. Brownson RC, Baker EA, Deshpande AD, Gillespie KN. Evidence-Based Public Health. New York, NY: Oxford University Press; 2017.
11. Kohatsu ND, Robinson JG, Torner JC. Evidence-based public health: an evolving concept. Am J Prev Med. 2004;27(5):417–421.
12. Baker EA, Brownson RC, Dreisinger M, McIntosh LD, Karamehic-Muratovic A. Examining the role of training in evidence-based public health: a qualitative study. Health Promot Pract. 2009;10(3):342–348.
13. Chambers D, Kerner J. Closing the gap between discovery and delivery. Dissemination and Implementation Research Workshop: Harnessing Science to Maximize Health. Rockville, MD: National Institutes of Health; 2007.
14. Brownson RC, Fielding JE, Maylahn CM. Evidence-based decision making to improve public health practice. Front Public Health Serv Syst Res. 2013;2(2):2.
15. Public Health Accreditation Board. Public Health Accreditation Board Standards and Measures, Version 1.5. Alexandria, VA: Public Health Accreditation Board; 2013. http://www.phaboard.org/wp-content/uploads/SM-Version-1.5-Board-adopted-FINAL-01-24-2014.docx.pdf. Accessed August 25, 2015.
16. Brownson RC, Ballew P, Kittur ND, et al. Developing competencies for training practitioners in evidence-based cancer control. J Cancer Educ. 2009;24(3):186–193.
17. Luck J, Yoon J, Bernell S, et al. The Oregon Public Health Policy Institute: building competencies for public health practice. Am J Public Health. 2015;105(8):1537–1543.
18. Jacobs JA, Duggan K, Erwin P, et al. Capacity building for evidence-based decision making in local health departments: scaling up an effective training approach. Implement Sci. 2014;9:124.
19. Brownson RC, Allen P, Duggan K, Stamatakis KA, Erwin PC. Fostering more-effective public health by identifying administrative evidence-based practices: a review of the literature. Am J Prev Med. 2012;43(3):309–319.
20. Centers for Disease Control and Prevention. National Public Health Performance Standards. Atlanta, GA: Centers for Disease Control and Prevention. https://www.cdc.gov/stltpublichealth/nphps/index.html. Published 2018. Accessed October 22, 2018.
21. Dodson EA, Baker EA, Brownson RC. Use of evidence-based interventions in state health departments: a qualitative assessment of barriers and solutions. J Public Health Manag Pract. 2010;16(6):E9–E15.
22. Jacobs JA, Jones E, Gabella BA, Spring B, Brownson RC. Tools for implementing an evidence-based approach in public health practice. Prev Chronic Dis. 2012;9:E116.
23. Maylahn C, Fleming D, Birkhead G. Health departments in a brave new world. Prev Chronic Dis. 2013;10:E41.
24. Brownson RC, Allen P, Jacob RR, et al. Controlling chronic diseases through evidence-based decision making: a group-randomized trial. Prev Chronic Dis. 2017;14:E121.
25. Jacobs JA, Dodson EA, Baker EA, Deshpande AD, Brownson RC. Barriers to evidence-based decision making in public health: a national survey of chronic disease practitioners. Public Health Rep. 2010;125(5):736–742.
26. Lewis CC, Proctor EK, Brownson RC. Measurement issues in dissemination and implementation research. In: Brownson RC, Colditz GA, Proctor EK, eds. Dissemination and Implementation Research in Health: Translating Science to Practice. 2nd ed. New York, NY: Oxford University Press; 2017.
27. Lobb R, Colditz GA. Implementation science and its application to population health. Annu Rev Public Health. 2013;34:235–251.
28. Jacobs JA, Clayton PF, Dove C, et al. A survey tool for measuring evidence-based decision making capacity in public health agencies. BMC Health Serv Res. 2012;12:57.
29. Stamatakis KA, Ferreira Hino AA, Allen P, et al. Results from a psychometric assessment of a new tool for measuring evidence-based decision making in public health organizations. Eval Program Plann. 2017;60:17–23.
30. Reis R, Duggan K, Allen P, Stamatakis K, Erwin P, Brownson R. Developing a tool to assess administrative evidence-based practices in local health departments. Front Public Health Serv Syst Res. 2014;3(3):Article 2. doi:10.13023/FPHSSR.0303.02.
31. Brownson RC, Reis RS, Allen P, et al. Understanding administrative evidence-based practices: findings from a survey of local health department leaders. Am J Prev Med. 2014;46(1):49–57.
32. Erwin PC, Harris JK, Smith C, Leep CJ, Duggan K, Brownson RC. Evidence-based public health practice among program managers in local public health departments. J Public Health Manag Pract. 2014;20(5):472–480.
33. Eyler AA, Valko C, Ramadas R, Macchi M, Fershteyn Z, Brownson RC. Administrative evidence-based practices in state chronic disease practitioners. Am J Prev Med. 2018;54(2):275–283.
34. Parks RG, Tabak RG, Allen P, et al. Enhancing evidence-based diabetes and chronic disease control among local health departments: a multi-phase dissemination study with a stepped-wedge cluster randomized trial component. Implement Sci. 2017;12(1):22.
35. Stamatakis KA, McQueen A, Filler C, et al. Measurement properties of a novel survey to assess stages of organizational readiness for evidence-based interventions in community chronic disease prevention settings. Implement Sci. 2012;7:65.
36. Erwin PC, Barlow P, Brownson RC, Amos K, Keck CW. Characteristics of academic health departments: initial findings from a cross-sectional survey. J Public Health Manag Pract. 2016;22(2):190–193.
37. Erwin PC, Harris J, Wong R, Plepys CM, Brownson RC. The academic health department: academic-practice partnerships among accredited U.S. schools and programs of public health, 2015. Public Health Rep. 2016;131(4):630–636.
38. University of Wisconsin Population Health Institute, School of Medicine and Public Health. What Works for health: policies and programs to improve Wisconsin's health. http://whatworksforhealth.wisc.edu/topic.php?id=21. Accessed October 22, 2018.
39. Schumacker RE, Lomax RG. A Beginner's Guide to Structural Equation Modeling. New York, NY: Routledge Academic; 2012.
40. Muthén LK, Muthén BO. Mplus User's Guide. 8th ed. Los Angeles, CA: Muthén & Muthén; 1998-2017.
41. Hu LT, Bentler PM. Cutoff criteria for fit indexes in covariance structure analysis: conventional criteria versus new alternatives. Struct Equation Model Multidiscip J. 1999;6(1):1–55.
42. Brown TA. Confirmatory Factor Analysis for Applied Research. New York, NY: Guilford Publications; 2014.
43. Allen P, Sequeira S, Jacob RR, et al. Promoting state health department evidence-based cancer and chronic disease prevention: a multi-phase dissemination study with a cluster randomized trial component. Implement Sci. 2013;8:141.
44. Kramer DM, Cole DC. Sustained, intensive engagement to promote health and safety knowledge transfer to and utilization by workplaces. Sci Commun. 2003;25(1):56–82.
45. Kramer DM, Wells RP, Carlan N, et al. Did you have an impact? A theory-based method for planning and evaluating knowledge-transfer and exchange activities in occupational health and safety. Int J Occup Saf Ergon. 2013;19(1):41–62.
46. Hu H, Allen P, Yan Y, Reis RS, Jacob RR, Brownson RC. Organizational supports for research evidence use in state public health agencies: a latent class analysis [published online ahead of print May 30, 2018]. J Public Health Manag Pract. doi:10.1097/PHH.0000000000000821.
47. Peirson L, Ciliska D, Dobbins M, Mowat D. Building capacity for evidence informed decision making in public health: a case study of organizational change. BMC Public Health. 2012;12:137.
48. Dobbins M, Hanna SE, Ciliska D, et al. A randomized controlled trial evaluating the impact of knowledge translation and exchange strategies. Implement Sci. 2009;4:61.
49. Aarons GA, Ehrhart MG, Farahnak LR, Sklar M. Aligning leadership across systems and organizations to develop a strategic climate for evidence-based practice implementation. Annu Rev Public Health. 2014;35:255–274.
50. Ward M, Dobbins M, Peirson L. Lessons learnt from implementing an organizational strategy for evidence-informed decision making. Public Health Panor. 2016;2(3):249–400.
Keywords:

confirmatory factor analysis; evidence-based decision making; measurement; organization; public health

Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.