Factors Influencing Implementation of Youth Physical Activity Interventions

An Expert Perspective

Lau, Erica Y.; Wandersman, Abraham H.; Pate, Russell R.

Translational Journal of the American College of Sports Medicine: July 1, 2016 - Volume 1 - Issue 7 - p 60–70
doi: 10.1249/TJX.0000000000000006
Original Investigation

ABSTRACT Little is known about the factors that influence implementation of physical activity interventions undertaken in youth-serving settings, and this lack of information impedes the development of effective implementation strategies. This study convened a panel of experts to identify the factors that are most important in achieving successful implementation of physical activity interventions in youth-serving organizations. Five recognized experts participated in a four-round, modified Delphi consensus process. The panelists were asked to reach consensus on a list of potential factors that are most important in predicting successful implementation and on the descriptions of these factors. They also provided estimates of the individual contribution each identified factor makes in predicting successful implementation. These estimates were then translated into Bayesian predictive models for factor selection. The expert panel achieved consensus on 23 factors. Results from the factor selection procedures indicated that a final model containing 15 factors yielded the greatest contributions in predicting successful implementation of youth physical activity interventions. In this final model, five factors were classified as organizational characteristics, six as implementation processes, two as provider characteristics, and one each as program characteristics and community-level factors. In conclusion, this is the first study to identify factors that are important in achieving successful implementation of youth physical activity interventions by systematically gathering information from a panel of experts. The 15 factors identified in this study provide important information to inform implementation planning and evaluation of future interventions.

1School of Kinesiology, University of British Columbia, Vancouver, BC, CANADA; 2Department of Psychology, University of South Carolina, Columbia, SC; and 3Department of Exercise Science, University of South Carolina, Columbia, SC

Address for correspondence: Erica Y. Lau, Ph.D., School of Kinesiology, Faculty of Education, University of British Columbia, 2146 Health Science Mall, Room 4604, Vancouver, BC, Canada V6T 1Z3 (E-mail: erica.lau@ubc.ca).


INTRODUCTION

Achieving optimal program implementation is challenging in many field-based health promotion interventions, including physical activity interventions carried out in youth-serving organizations (YSO). For example, a school-based intervention targeting physical activity in high school girls showed that 41% of the intervention schools did not achieve the intended levels of implementation (21). A preschool-based study found that 30% of the preschool teachers did not deliver the intervention components as planned (1). A community-based intervention targeting physical activity among children living in residential children's homes likewise reported that 40% of the intervention homes did not meet the implementation criteria (8,20). Researchers have suggested that suboptimal program implementation may dilute intervention effects, thus masking the potential benefits of a program (4,6,9–11). Therefore, it is important to understand the factors that influence program implementation so that researchers can develop more effective implementation strategies.

A plethora of theoretical frameworks describe a substantial number of factors that influence program implementation. A major limitation of these existing frameworks is that they were developed primarily based on the literature of health services programs and preventive interventions (e.g., drug abuse or tobacco prevention programs), which may not be fully applicable to youth physical activity interventions. A recent systematic review (17) identified 22 factors that influence program implementation of school-based physical activity interventions. The authors reported that 36% of the identified factors were not included in the Durlak and DuPre framework (10). We compared their findings with two widely used frameworks and found that 59% of the factors were not in the Greenhalgh model (13) and 70% were not in the Consolidated Framework for Implementation Research (CFIR) (5). These findings indicate that the existing frameworks may have overlooked some factors that are important in influencing program implementation of physical activity interventions in YSO.

Moreover, there is little evidence to support the operationalization of the existing factors within the context of youth physical activity interventions. For example, administrative support emerges as an important factor across existing frameworks. However, it is unclear what types of support (e.g., communication of support, permissions to engage in implementation activities) are needed to promote implementation of physical activity interventions in YSO. A lack of clear understanding could impede the development of relevant strategies for modifying these factors. Furthermore, the existing frameworks offer many factors but little guidance to help determine which factors are most relevant to implementation of physical activity interventions in YSO. Without such guidance, researchers and practitioners cannot develop tailored training and technical assistance to support quality implementation, and organizations cannot make informed decisions on which areas they need to focus their time and resources to maximize the chance of achieving successful implementation.

To advance implementation science for youth physical activity interventions, we need a model that describes and operationalizes factors influencing program implementation within the context of youth physical activity interventions. The model also should provide information for researchers, practitioners, and policy makers to systematically select factors that are most relevant to their intervention settings.

One solution is to create a predictive model that delineates the factors hypothesized to be most important to successful implementation and provides the weight of each identified factor in contributing to the likelihood of successful implementation. Ideally, the development of such a model would be based on empirical data collected from large-scale intervention studies with comprehensive assessments of both the level of implementation and its associated factors. Unfortunately, such empirical data are lacking in the field. An alternative approach that has been used successfully in other health-related programs is to elicit knowledge from a group of experts in the form of a mathematical model (3,12,14–16,24). The purpose of the current study was to convene a panel of experts to develop a predictive model for identifying the factors that are most important in achieving successful implementation of physical activity interventions in YSO.


METHODS

A Bayesian Predictive Model

We selected the Bayesian approach to develop the mathematical form of the predictive model because the application of Bayes' rule permitted us to translate expert knowledge into estimates for quantifying the importance of each identified factor in contributing to the likelihood of successful implementation (25). Also, the Bayesian model has no constraint on probability distribution, so it does not require extensive empirical data to obtain sufficient statistical power to construct a stable, reliable predictive model (25). These features make the Bayesian model particularly suitable to the current study, because empirical studies are lacking in the field.

In its simplest form, a Bayesian model assumes a dichotomous outcome, which for this study was successful or unsuccessful implementation of the intervention. To create the model, the expert panelists provided the following information (3,14): 1) an operational definition of successful implementation, 2) a set of conditionally independent factors that are important in predicting successful implementation, 3) likelihood ratios for each of the identified factors, and 4) estimates for testing internal validity of the model. These components are described in detail in the sections that follow.
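In odds form, Bayes' rule makes this structure explicit; a brief sketch (using the likelihood-ratio notation introduced for the third round below) is

\[ \frac{P(S \mid D_1,\ldots,D_k)}{P(U \mid D_1,\ldots,D_k)} \;=\; \frac{P(S)}{P(U)} \times \prod_{i=1}^{k}\frac{P(D_i \mid S)}{P(D_i \mid U)}, \]

where S denotes successful and U unsuccessful implementation, D_i is the observed level of the i-th factor, and the product form rests on the assumption that the factors are conditionally independent given the outcome.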


Expert Panel Participants

Previous studies (3,14,16) suggested that a panel size between five and seven members would provide optimal information for developing a Bayesian predictive model. To select the panel, we used a purposive sampling procedure (7). To ensure consistency of expertise levels within the panel, we targeted senior researchers who have substantial experience in implementing youth physical activity interventions. Eligibility criteria for the panelists were as follows: 1) academic appointment at the rank of associate professor or higher, 2) a track record of leading implementation of youth physical activity interventions, and 3) a demonstrated record of publications on process evaluation and implementation of youth physical activity interventions.

Based on the eligibility criteria, members of the study team generated a list of six panelists by reviewing journal articles and faculty biographical descriptions on university websites and consulting with senior researchers. Because the objective of this study was to identify a set of core factors that influence implementation of physical activity interventions across YSO, we attempted to obtain a balance of individuals with expertise in various settings, such as schools and communities. Invitation letters were sent to six researchers. Five agreed to participate and one did not respond. The final list of panelists consisted of four professors and one professor emeritus. These panelists had an average of over 20 years of experience in implementing, monitoring, evaluating, and publishing results of youth physical activity interventions in a variety of settings, including preschools, schools, afterschool programs, and communities (e.g., summer camps and scout troops).


Data Collection

The five experts participated in a four-round, modified Delphi process (2,19,22). In the first round, the panelists completed an online survey to independently define successful implementation and suggest factors that influence successful implementation. The second round provided a group setting (video conference) in which the panelists elaborated and discussed their views on the definition and potential factors of a successful implementation. The third round required the panelists to complete another online survey to independently rate the importance of the potential factors. The fourth round involved a final online survey that collected data for assessing test–retest reliability of panelists' ratings. The design of the surveys and video conference were guided by previous studies (15,17). Data were collected between February and May 2015. The Institutional Review Board at the University of South Carolina approved all study procedures.


First Round

The online survey consisted of seven open-ended questions that required the panelists to 1) operationalize successful implementation of a physical activity intervention carried out in YSO based on their own experience, 2) suggest six factors that are most important in predicting successful implementation, and 3) describe the suggested factors at three factor levels: high, moderate, and low. For example, a panelist might specify an organization's physical activity culture as a factor and then describe a high-level physical activity culture as one in which physical activity is central to the organization's mission and the organization currently offers physical activity programs; a moderate-level culture as one in which physical activity is not central to the organization's mission but the organization currently offers physical activity programs; and a low-level culture as one in which physical activity is not central to the organization's mission and the organization currently offers no physical activity programs. Responses were aggregated and summarized into an initial model containing all the factors suggested by the panelists, and the model was circulated among the panelists for review before the video conference.


Second Round

All panelists participated in a 90-min video conference 1 month after the first survey. Part 1 of the video conference provided opportunities for the panelists to elaborate and discuss their thoughts about 1) the definition of successful implementation, 2) the importance of including the suggested factors, and 3) ways to improve the descriptions of the suggested factors. Consensus on these three items was achieved through an iterative process of voting and discussion. In part 2 of the video conference, the panelists evaluated the conditional independence of each of the potential factors. The panelists were told to assume that an organization achieved a successful implementation and that the organization was rated as having a high level on a specific factor, such as leadership support. They then discussed whether knowing this piece of information would tell them a great deal about how the organization might have responded to any of the other factors (14). If a factor violated conditional independence, it was either rewritten or eliminated. This process was repeated for every potential factor.

After the video conference, the research team refined the initial model in light of the discussion. The revised model then was distributed to the panelists for final feedback. These procedures resulted in a final list of factors that informed the surveys used in the third and fourth rounds.


Third Round

This survey consisted of two sections. In section 1, the panelists estimated likelihood ratios for the final list of factors. The likelihood ratios are the weights of each identified factor in contributing to a successful implementation. Panelists were asked to assume that 100 hypothetical YSO had a successful implementation and another 100 organizations had an unsuccessful implementation. They were told to distribute the 100 successful cases and the 100 unsuccessful cases among the three factor levels for each of the identified factors. A sample question is illustrated in Figure 1.

Figure 1

The likelihood ratio for each factor level was expressed as the ratio of the conditional probability of observing that factor level (a datum, D) given a successful implementation to the conditional probability of the same datum given an unsuccessful implementation: [P(Di|S)/P(Di|U)]. Using the example illustrated in Figure 1, the likelihood ratios for a factor called "implementer belief and motivation" would be 40/10 = 4/1 for the high factor level, 30/30 = 1/1 for the moderate factor level, and 30/60 = 1/2 for the low factor level. Final likelihood ratios for each factor level were obtained by averaging the individual estimates across the five panelists.
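To make the computation concrete, the following short Python sketch (our illustration; the first panelist's counts mirror the Figure 1 example, the second panelist's counts are invented) derives factor-level likelihood ratios from the distributed cases and averages them across panelists:

```python
# Sketch: deriving factor-level likelihood ratios from panelists' case distributions.
# Each panelist distributes 100 hypothetical successful and 100 unsuccessful
# organizations across the high/moderate/low levels of a factor.

LEVELS = ("high", "moderate", "low")

def likelihood_ratios(successes, failures):
    """Return P(level | success) / P(level | failure) for each factor level."""
    total_s, total_u = sum(successes.values()), sum(failures.values())
    return {lvl: (successes[lvl] / total_s) / (failures[lvl] / total_u) for lvl in LEVELS}

# Hypothetical counts for "implementer belief and motivation".
panelists = [
    {"successes": {"high": 40, "moderate": 30, "low": 30},
     "failures":  {"high": 10, "moderate": 30, "low": 60}},
    {"successes": {"high": 50, "moderate": 30, "low": 20},
     "failures":  {"high": 20, "moderate": 40, "low": 40}},
]

per_panelist = [likelihood_ratios(p["successes"], p["failures"]) for p in panelists]
final = {lvl: sum(lr[lvl] for lr in per_panelist) / len(per_panelist) for lvl in LEVELS}
print(per_panelist[0])  # {'high': 4.0, 'moderate': 1.0, 'low': 0.5}, as in the text
print(final)            # panel-averaged likelihood ratios
```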

In section 2, the panelists provided estimates for testing the internal validity of the predictive model. Ideally, the predictive model would be applied to predict successful implementation in real cases, which would establish its external validity. In the absence of a suitable empirical database, however, we used experts' opinions to generate a hypothetical data set for testing the internal validity of the model. The panelists were asked to assume that a physical activity intervention was carried out in a sample of 60 YSO. They were then provided with a set of computer-generated, hypothetical profiles reflecting how the 60 organizations rated on the factors identified in the second round. Every profile included all of the identified factors, but with varying factor levels for each factor. Each panelist was asked to estimate how likely organizations with a specific profile would be to implement the intervention successfully, taking all of the identified factors into account at the same time. These estimates are called "holistic ratings" (3). The holistic ratings were made on a 0–100 scale, where 0 indicates absolutely no chance and 100 indicates a 100% chance of successful implementation. A sample question is presented in Figure 1. A final group estimate for each profile was calculated by averaging the estimates across the five panelists (see Document, Supplemental Content 1, third round survey, http://links.lww.com/TJACSM/A7—the development of a Bayesian model to predict implementation success of physical activity interventions in YSO).


Fourth Round

Because the holistic ratings were used as a “criterion” for testing the internal validity of the Bayesian predictive model, it was important to establish the reliability of these ratings. Two weeks after the third round, panelists completed a final online survey to rerate 40 hypothetical profiles randomly selected from the original 60 profiles.


Analysis

Estimating Posterior Odds of Success

Posterior odds of success were estimated for each of the 60 hypothetical profiles used in the third round by multiplying the prior odds of success by the product of the factor-level likelihood ratios. The prior odds of success are the ratio of the prior probability of successful implementation to the prior probability of unsuccessful implementation: [P(S)/P(U)]. Because of the lack of previous research to guide the specification of an informative prior, this study assigned a noninformative prior of 1/1 to the model (3). The factor-level likelihood ratios were obtained in round 3 of the modified Delphi process. Table 1 illustrates the use of a three-factor Bayesian model to estimate the posterior odds of successful implementation for a specific hypothetical profile. This example shows that an organization with the profile given in Figure 1 has about a 57% chance of being successful.

TABLE 1
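A minimal Python sketch of this calculation is shown below; the factor names come from Table 2, but the likelihood ratios and profile are illustrative rather than the values reported in Table 1:

```python
# Sketch: posterior odds of success for one hypothetical organization profile.
# Three-factor illustration; these likelihood ratios and this profile are invented.

likelihood_ratios = {
    "leadership motivation and engagement": {"high": 3.0, "moderate": 1.0, "low": 0.4},
    "engaging intervention staff":          {"high": 2.5, "moderate": 1.0, "low": 0.5},
    "available staff":                      {"high": 2.0, "moderate": 1.1, "low": 0.6},
}

profile = {  # level at which the organization was rated on each factor
    "leadership motivation and engagement": "moderate",
    "engaging intervention staff": "high",
    "available staff": "low",
}

posterior_odds = 1.0  # noninformative prior odds P(S)/P(U) = 1/1
for factor, level in profile.items():
    posterior_odds *= likelihood_ratios[factor][level]

probability = posterior_odds / (1.0 + posterior_odds)  # convert odds to probability
print(f"posterior odds = {posterior_odds:.2f}, P(success) = {probability:.0%}")
```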


Test–Retest Reliability

Intraclass correlations (ICC) with a two-way random model were performed to examine test–retest reliability of holistic ratings obtained in the third and fourth rounds. ICC values of ≥0.75 indicate good reliability (23).
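For readers replicating this step outside SPSS, a two-way random-effects ICC can be obtained, for example, with the pingouin package in Python; the ratings below are illustrative, not the study's data:

```python
# Sketch: test-retest reliability of holistic ratings via a two-way random-effects ICC.
# Requires the pingouin package; column names and values are illustrative.
import pandas as pd
import pingouin as pg

ratings = pd.DataFrame({
    "profile": [1, 1, 2, 2, 3, 3, 4, 4, 5, 5, 6, 6],   # hypothetical organization profiles
    "round":   ["third", "fourth"] * 6,                 # the two rating occasions
    "rating":  [72, 70, 35, 40, 58, 55, 80, 76, 22, 25, 64, 60],  # 0-100 holistic ratings
})

icc = pg.intraclass_corr(data=ratings, targets="profile",
                         raters="round", ratings="rating")
print(icc[["Type", "ICC"]])  # the ICC2 row corresponds to a two-way random, single-measure model
```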


Internal Validity

Pearson product moment correlations were used to assess the internal validity of the Bayesian model. The holistic ratings were correlated with the model-derived posterior odds of success for the hypothetical profiles. A higher correlation indicates that the model better captures the panelists' judgment.


Factor Selection

First, a diagnostic power score was calculated for each factor to serve as a criterion for factor selection. The diagnostic power score reflects the spread between the largest and the smallest likelihood ratio for a factor, with the smallest ratio inverted before the two values are combined. For example, if the highest and lowest likelihood ratios for a factor are 2.5/1 and 1/10, its diagnostic power would be 2.5 + 10 = 12.5. This score provides a crude measure of the amount of information that a given factor can provide relative to other factors, with a larger value indicating that a factor is more informative (3).

A backward factor selection procedure was used to reduce the final list of factors to those that are most important in predicting successful implementation. We started with a full model consisting of all factors identified in the third round and, at each step, dropped the factor with the lowest diagnostic power score. A factor was removed from the model if dropping it led to an increased or unchanged internal validity. The procedure was repeated until internal validity no longer improved. All data analyses were conducted using SPSS version 20.0 (IBM, Armonk, NY).
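The selection procedure can be sketched as follows; this is our own illustrative reconstruction with randomly generated stand-in data (not the authors' SPSS analysis), where internal validity is the Pearson correlation between the panel's holistic ratings and the model-derived posterior odds, as described above:

```python
# Sketch: diagnostic power and backward factor selection (illustrative reconstruction).
# Stand-in data are generated for the factor-level likelihood ratios, the 60
# hypothetical profiles, and the holistic ratings.
import random
from scipy.stats import pearsonr

random.seed(1)
LEVELS = ("high", "moderate", "low")
factors = [f"factor_{i}" for i in range(23)]               # placeholder factor names
likelihood_ratios = {f: {"high": random.uniform(1.5, 4.0),
                         "moderate": 1.0,
                         "low": random.uniform(0.1, 0.8)} for f in factors}
profiles = [{f: random.choice(LEVELS) for f in factors} for _ in range(60)]
holistic = [random.uniform(0, 100) for _ in profiles]       # stand-in panel holistic ratings

def diagnostic_power(lrs):
    """Largest likelihood ratio plus the inverted smallest one, as in the worked
    example in the text (2.5 and 1/10 give 2.5 + 10 = 12.5)."""
    return max(lrs.values()) + 1.0 / min(lrs.values())

def posterior_odds(profile, kept):
    odds = 1.0                                               # noninformative prior odds 1/1
    for f in kept:
        odds *= likelihood_ratios[f][profile[f]]
    return odds

def internal_validity(kept):
    """Pearson correlation between model-derived posterior odds and holistic ratings."""
    return pearsonr([posterior_odds(p, kept) for p in profiles], holistic)[0]

# Order factors by diagnostic power, then drop the weakest factor while internal
# validity increases or stays the same; stop as soon as it worsens.
kept = sorted(factors, key=lambda f: diagnostic_power(likelihood_ratios[f]), reverse=True)
validity = internal_validity(kept)
while len(kept) > 1:
    candidate = kept[:-1]
    new_validity = internal_validity(candidate)
    if new_validity >= validity:
        kept, validity = candidate, new_validity
    else:
        break
print(f"retained {len(kept)} factors, internal validity r = {validity:.2f}")
```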


RESULTS

Definition of Successful Implementation

Based on the data collected during the first online survey and the video conference, the panelists indicated that successful implementation means “the intervention is carried out as planned as measured by fidelity to the protocol,” in which “protocol” refers to the quality elements specified by the intervention developer that are believed to be responsible for the intervention's effects. Because of the lack of consistent findings in the literature, the panelists decided not to determine a specific criterion or cut-point of fidelity. However, they indicated that researchers should explicitly define the quality elements and specify the criteria of successful implementation that are most relevant for their particular study.


Factors Identified by the Expert Panelists for Predicting Successful Implementation

The factors identified and their descriptions are presented in Table 2. The panelists identified 23 factors, comprising 69 factor levels, that are important in predicting successful implementation. We categorized the identified factors into five types. Seven factors were classified as organizational characteristics: leadership motivation and engagement, physical activity culture, available space, available facilities and equipment, available staff, communication, and competing programs in the organization. Nine factors were categorized as implementation processes: needs assessment, goal setting, engaging intervention staff, engaging youth, engaging program champions, training, technical assistance, reflecting and evaluating, and sustainability plans. Two factors, provider belief and motivation and provider knowledge and skills, were categorized as provider characteristics. Three factors were related to program characteristics: fun and inclusive design, empirical evidence, and adaptability. Finally, there were two community-level factors: parental support for physical activity and competing programs in the community. In addition, the expert panel produced detailed descriptions of each factor at three levels of influence on successful implementation: high, moderate, and low.

TABLE 2

The factor-level likelihood ratios and diagnostic power scores for the 23 factors identified are presented in Table 3. Factors with the highest diagnostic power scores were leadership motivation and engagement and engaging intervention staff; factors with the lowest scores were parental support for physical activity and competing programs in the community.

TABLE 3


Test–Retest Reliability

Estimates of holistic ratings obtained in the third and fourth rounds were strongly correlated (ICC = 0.88), indicating good reliability.


Internal Validity and Factor Selection

The correlation between the posterior odds of success derived from the 23-factor full model and the holistic ratings was 0.65 (P < 0.001), suggesting a moderate level of internal validity. With regard to factor selection, the backward selection procedure indicated that, among the comparison models, a final model consisting of 15 factors yielded the highest level of internal validity (r = 0.76, P < 0.01). The eight factors eliminated were as follows: physical activity culture, communication, fun and inclusive design, empirical evidence, needs assessment, engaging youth, sustainability plans, and parental support for physical activity (Table 3).


DISCUSSION

This is the first study that used experts' opinions to identify the factors that influence program implementation within the context of youth physical activity interventions. An accomplished panel of experts with experience implementing and studying youth physical activity interventions engaged in a rigorous consensus process. During the first two rounds, the panelists identified and described 23 potential factors that are most important in achieving successful implementation of youth physical activity interventions. To further reduce the list of factors, we translated the experts' ratings into several Bayesian predictive models. Among these models, a final model that retained 15 of the identified factors yielded the greatest contributions in predicting successful implementation of the hypothetical profiles. Although external validity remains to be established, this 15-factor final model had good internal validity.

We expected that the final model would be composed of factors with the highest diagnostic power because these factors are posited to be most informative in explaining variations in successful implementation (3), but this was not the case in the present study. The final model eliminated three major factors with relatively high diagnostic power (i.e., needs assessment, physical activity culture, and fun and inclusive intervention design) but retained two factors with low diagnostic power (i.e., competing programs within the organization and competing programs in the community). A plausible explanation is that the two factors with low diagnostic power may have interacted synergistically with other factors in the model, thus outweighing the effects of the three major factors on successful implementation (18). These findings also indicate the importance of considering collective contributions rather than individual contributions of these factors in explaining successful implementation.

We compared our findings with three existing frameworks: 1) the CFIR (5), the most researched framework, which contains the most comprehensive set of well-described factors influencing program implementation of health services programs; 2) the Durlak and DuPre framework (10), which delineates factors influencing implementation of health promotion interventions, including physical activity interventions, targeting children and adolescents; and 3) the factors identified by Naylor et al. (17), which influence implementation of school-based physical activity interventions. Results from this comparison show that 12 factors identified in our final model align with the CFIR (5), 7 factors align with the Durlak and DuPre framework (10), and 8 factors were also identified by Naylor et al. (17). Overall, 13 of the 15 factors (87%) in our final model emerged in at least one of the comparison frameworks.

Although a majority of the factors in our final model have been identified previously, our model adds value to the existing literature by illustrating that only 12 of the 31 factors (39%) delineated in the CFIR (5), 7 of the 23 factors (30%) in the Durlak and DuPre framework (10), and 9 of the 22 factors (41%) in the review by Naylor et al. (17) are relevant or make a significant contribution in predicting successful implementation of physical activity interventions in YSO. Furthermore, our model provides standardized terminology and descriptions for each of the identified factors. Therefore, our model can promote more consistent conceptualizations of these influencing factors in the field, which should assist researchers, practitioners, and policy makers in developing effective implementation strategies and allow meaningful comparisons across future studies of physical activity in YSO.

In addition, our final model identifies two factors that did not emerge as important in the comparison frameworks. The first factor is competing programs within the organization. As suggested by the expert panelists, competing programs within the organization could promote or hinder program implementation because they have implications for the availability of resources and staffing. The panelists indicated that organizations that adopt multiple competing programs tend to be poorly coordinated in terms of scheduling, resource allocation, and staffing. The second factor is needs assessment. The expert panelists indicated that conducting a needs assessment could assist intervention developers in designing context-specific implementation protocols, which should in turn improve levels of implementation. However, future studies are needed to explore whether these proposed factors apply to physical activity interventions across different YSO.

In our final model, a majority of the identified factors fall into the categories of organizational characteristics and implementation processes. In the CFIR (5), most of the identified factors relate to organizational characteristics, implementation processes, and intervention characteristics, whereas most of the identified factors in the Durlak and DuPre framework (10) relate to organizational characteristics, provider characteristics, and community-level factors. In the systematic review conducted by Naylor et al. (17), the authors identified more factors related to organizational characteristics and provider characteristics. These findings show that organizational characteristics are key to achieving successful program implementation across disciplines. However, there is considerable heterogeneity in the types and number of factors within organizational characteristics across frameworks. These differences suggest the need to develop context-specific rather than generic models to predict successful implementation.


Limitations

Several limitations of this study should be considered. First, we were not able to incorporate the perspectives of the front-line staff who were responsible for day-to-day intervention operations, such as project coordinators or interventionists. We acknowledge that these individuals could provide valuable insights regarding factors that are most important in achieving successful implementation. Because of the high turnover in this population, however, it was difficult to recruit a group of individuals who possess similar levels of expertise and experience. We encourage other researchers to explore the perspective of this population. Second, the predictive model is considered preliminary because its external validity has not yet been established. However, previous studies (3,14–16) indicated that models developed through this systematic Bayesian approach can have good external validity. Third, as suggested in previous studies (3,14,15), the expert consensus process would ideally be conducted in a 2-d intensive in-person meeting. To ensure optimal participation of the expert panelists, we instead used a modified Delphi approach; nevertheless, the final model had good internal validity. Lastly, it is noteworthy that the interrelationships among the identified factors will vary across interventions with different designs, at different implementation stages, and in different implementation settings (5,10,13,18). Therefore, the list of factors identified in this study is not intended to be a prescriptive formula. Rather, it is intended to provide researchers with a set of core elements that they can adopt, modify, and test.


Implications and Future Studies

The current findings have immediate implications. The list of factors and their descriptions can be used by researchers and YSO as a formative assessment tool to guide the planning and evaluation of their implementation efforts. For example, the factor “training” listed in Table 2 provides clear descriptions of what high-quality training sessions would look like; these could be used to guide the development of staff training, and those descriptions can be translated into a rubric for evaluation. Moreover, researchers can adopt or modify the list of identified factors to suit their respective interventions.

This study is a step toward enhancing implementation of physical activity interventions carried out in YSO. Future studies will continue to refine the descriptions of the identified factors and establish the external validity of the predictive model. Our goal is to produce a valid diagnostic tool, with a set of well-defined factors and an accompanying predictive model, that can be used by researchers and staff members in YSO to systematically assess and identify factors that may assist or impede implementation before the intervention begins. With such information, necessary resources can be made available to the organizations and capacity-building strategies can be tailored accordingly. Ultimately, better implementation planning may result in enhanced implementation fidelity, which may in turn improve program effectiveness.

The authors wish to acknowledge Drs. Thomas Baranowski, David Dzewaltowski, Thomas L. McKenzie, Dianne Ward, and Dawn K. Wilson for participating in the expert panel, and Dr. Ruth P. Saunders for her involvement in the conception and design of the study. The present study was not supported by any grant or by any external source of funding.

The authors declare that they have no competing interests. The results of the present study do not constitute endorsement by the American College of Sports Medicine.


REFERENCES

1. Alhassan S, Whitt-Glover MC. Intervention fidelity in a teacher-led program to promote physical activity in preschool-age children. Prev Med. 2014;69(Suppl 1):S34–6.
2. Bell DS, Marken RS, Meili RC, et al. Recommendations for comparing electronic prescribing systems: results of an expert consensus process. Health Aff (Millwood). 2004;(Suppl Web Exclusives):W4-305-17.
3. Bosworth K, Gingiss PM, Potthoff S, Roberts-Gray C. A Bayesian model to predict the success of the implementation of health and education innovations in school-centered programs. Eval Program Plann. 1999;22(1):1–11.
4. Chen HT. Practical Program Evaluation: Assessing and Improving Planning, Implementation, and Effectiveness. USA: Sage Publications, Inc.; 2005.
5. Damschroder LJ, Aron DC, Keith RE, et al. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4(1):50.
6. Dane AV, Schneider BH. Program integrity in primary and early secondary prevention: are implementation effects out of control? Clin Psychol Rev. 1998;18(1):23–45.
7. Devers KJ, Frankel RM. Study design in qualitative research—2: sampling and data collection strategies. Educ Health (Abingdon). 2000;13(2):263–71.
8. Dominick GM, Saunders RP, Dowda M, et al. Effects of a structural intervention and implementation on physical activity among youth in residential children's homes. Eval Program Plann. 2014;46:72–9.
9. Durlak JA. Why program implementation is important. J Prev Interv Community. 1998;17(2):5–18.
10. Durlak JA, DuPre EP. Implementation matters: a review of research on the influence of implementation on program outcomes and the factors affecting implementation. Am J Community Psychol. 2008;41(3–4):327–50.
11. Dusenbury L, Brannigan R, Falco M, et al. A review of research on fidelity of implementation: implications for drug abuse prevention in school settings. Health Educ Res. 2003;18(2):237–56.
12. Gingiss PM, Roberts-Gray C, Boerm M. Bridge-it: a system for predicting implementation fidelity for school-based tobacco prevention programs. Prev Sci. 2006;7(2):197–207.
13. Greenhalgh T, Robert G, Macfarlane F, et al. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q. 2004;82(4):581–629.
14. Gustafson DH, Sainfort F, Eichler M, et al. Developing and testing a model to predict outcomes of organizational change. Health Serv Res. 2003;38(2):751–76.
15. Gustafson DH, Sainfort F, Johnson SW, et al. Measuring quality of care in psychiatric emergencies: construction and evaluation of a Bayesian index. Health Serv Res. 1993;28(2):131–58.
16. Molfenter T, Gustafson D, Kilo C, et al. Prospective evaluation of a Bayesian model to predict organizational change. Health Care Manage Rev. 2005;30(3):270–9.
17. Naylor PJ, Nettlefold L, Race D, et al. Implementation of school based physical activity interventions: a systematic review. Prev Med. 2015;72:95–115.
18. Nilsen P. Making sense of implementation theories, models and frameworks. Implement Sci. 2015;10:53.
19. Pitt E, Kendall E, Hills A, et al. Listening to the experts: is there a place for food taxation in the fight against obesity in early childhood? BMC Obes. 2014;1(1):1–9.
20. Saunders RP, Evans AE, Kenison K, et al. Conceptualizing, implementing, and monitoring a structural health promotion intervention in an organizational setting. Health Promot Pract. 2013;14(3):343–53.
21. Saunders RP, Ward D, Felton GM, et al. Examining the link between program implementation and behavior outcomes in the lifestyle education for activity program (LEAP). Eval Program Plann. 2006;29(4):352–64.
22. Shekelle PG, Kahan JP, Bernstein SJ, et al. The reproducibility of a method to identify the overuse and underuse of medical procedures. N Engl J Med. 1998;338(26):1888–95.
23. Shrout PE, Fleiss JL. Intraclass correlations: uses in assessing rater reliability. Psychol Bull. 1979;86(2):420–8.
24. Wen KY, Gustafson DH, Hawkins RP, et al. Developing and validating a model to predict the success of an IHCS implementation: the Readiness for Implementation Model. J Am Med Inform Assoc. 2010;17(6):707–13.
25. Zyphur MJ, Oswald FL. Bayesian estimation and inference: a user's guide. J Manag. 2013.

© 2016 American College of Sports Medicine