Innovative Methods for Designing Actionable Program Evaluation

Nesbit, Brandon, MPH; Hertz, Marci, MPH; Thigpen, Sally, MPA; Castellanos, Ted, MPH; Brown, Michelle; Porter, Jamila, DrPH; Williams, Amber

Journal of Public Health Management and Practice: January/February 2018 - Volume 24 - Supplement 1 - p S12–S22
doi: 10.1097/PHH.0000000000000682
Research Reports: Research Full Report

Context: For most programs, whether funded through governmental agencies or nongovernmental organizations, demonstrating the impact of implemented activities is vital to ensuring continued funding and support.

Objective: Program evaluation is a critical tool that serves the dual purpose of describing impact and identifying areas for program improvement. From a funder's perspective, describing the individual and collective impact of state-based programs can be challenging due to variations in strategies being implemented and types of data being collected.

Design: A case study was used to describe the actionable, mixed-methods evaluation of the Core Violence and Injury Prevention Program (Core VIPP), including how the evaluation design and approach shifted to address evolving challenges faced by award recipients over time. Particular emphasis is given to innovative methods for collecting, analyzing, and disseminating data for key state and federal stakeholders.

Results: The results of the Core VIPP evaluation showed how this funding played a vital role in building injury and violence prevention capacity in state health departments, leading to decreases in both intermediate and long-term injury- and violence-related outcomes.

Conclusions: The lessons learned through the mixed-method evaluation of the Core VIPP informed the structure of the subsequent funding cycle (Core SVIPP) to include more prescriptive requirements for evidence-based implementation and a state support team structure for delivery of training and technical assistance.

Division of Analysis, Research and Practice Integration, National Center for Injury Prevention and Control, Centers for Disease Control and Prevention, Atlanta, Georgia (Messrs Nesbit and Castellanos and Mss Hertz, Thigpen, and Brown); and Safe States Alliance, Atlanta, Georgia (Dr Porter and Ms Williams).

Correspondence: Brandon Nesbit, MPH, Division of Analysis, Research, and Practice Integration, National Center for Injury Prevention and Control, Centers for Disease Control and Prevention, 4770 Buford Hwy, Atlanta, GA 30341 (vxw6@cdc.gov).

The authors thank Core State Violence and Injury Prevention Program State Grantees and Shenee Bryan.

The authors declare no conflicts of interest.

The findings and conclusions in this report are those of the authors and do not necessarily represent the official position of the Centers for Disease Control and Prevention.

Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal's Web site (http://www.JPHMP.com).

Unintentional and violence-related injuries and their consequences are the leading causes of death for the first 4 decades of life, regardless of gender, race, or socioeconomic status.1 More than 192 000 individuals in the United States die each year as a result of unintentional injuries and violence, and more than 27 million others suffer nonfatal injuries requiring emergency department visits.1 The total costs of fatal and nonfatal injuries in 2013 were estimated to be $671 billion in medical and work-loss costs.2 Many events that result in injury and/or death could be prevented if evidence-based public health strategies, practices, and policies were used throughout the nation. Effective performance management (especially through rigorous and real-time monitoring, evaluation, and program improvement) is key to program implementation.3 This article discusses how performance management has been key to successfully applying program funding to improve health outcomes.

The Centers for Disease Control and Prevention's (CDC's) National Center for Injury Prevention and Control (Injury Center) is committed to reducing injuries, violence, and disabilities by working with partners to provide leadership in identifying priorities, promoting prevention strategies, developing useful tools, and monitoring the effectiveness of injury and violence prevention (IVP) program activities. Although violence and unintentional injuries have been recognized as public health issues for several decades, state health department infrastructure and funding for their prevention remain limited.4 Monitoring health status to identify and solve community health problems is the first of the 10 essential functions of public health services, but state IVP programs have historically reported lacking access to an epidemiologist, statistician, or other data professional.4 Thus, the purpose of the Core Violence and Injury Prevention Program (Core VIPP), funded by the CDC from 2011 to 2016, was to assist state health departments in building and/or maintaining effective delivery systems for dissemination, implementation, and evaluation of best practice IVP programs and policies. The evaluation was designed to capture the impact of strengthening public health infrastructure and implementing evidence-based strategies. In addition, the evaluation was designed to reflect community context (partnerships, policy realities, and a wide range of quality in evidence); rapid-cycle feedback was critical to successful, nimble implementation. This actionable design offered opportunities to exchange rapid feedback with grantees and allowed the evaluation team—comprising staff from the Safe States Alliance, the Society for Advancement of Injury Research, and the CDC Evaluation and Integration Team—to (1) capture changes in grantees' capacity to implement IVP work; (2) describe grantees' impact on IVP-related outcomes; and (3) implement continuous quality improvement processes.

Background

Twenty state health departments were each funded at $250 000 per year for the Core VIPP. State grantees were required to work with a collaborative group to examine surveillance data and identify at least 4 IVP topics as their focus areas. The Funding Opportunity Announcement (FOA)5 outlined 5 goals for performance measures: (1) enhancing IVP program infrastructure; (2) collecting and analyzing data; (3) supporting and evaluating program and policy interventions; (4) affecting policy; and (5) program evaluation. It was vital that the resulting evaluation not only track these performance measures but also be able to determine the impact of the funding across the project period. To accomplish this, an appropriate evaluation design had to be developed in advance and continually improved throughout the project.

Designing the Evaluation

A systematic evaluation of the Core VIPP was designed using the CDC Framework for Program Evaluation. The framework encourages comprehensive and inclusive development of program evaluation as a process that parallels and is integrated with program development and design.6 It offers a means to improve, track, and monitor public health actions at all levels of implementation and encourages an evaluation approach that is integrated with routine program operations, applying to both federal program and state health department activities. Focusing on this integrated approach, the Core VIPP evaluation emphasized practical, ongoing evaluation strategies involving all program stakeholders as well as evaluation experts.

Evaluating Core VIPP served multiple purposes: monitoring progress, revealing health impact, and qualitatively and quantitatively describing changes in state health department capacity across 5 performance measure categories (program infrastructure; collecting and analyzing data; supporting and evaluating program and policy interventions; using evidence to inform policy; and program evaluation). See Table 1 and Figure 1 for the evaluation framework and data sources used in this evaluation. Evaluation questions were both process- and outcome-oriented:

FIGURE 1

TABLE 1

Process

  1. To what extent are states reaching performance measures identified in the FOA? What are facilitators and barriers?
  2. To what extent are states making measurable progress toward their Focus Area Objectives? (Objectives were reported in the SMART format: Specific, Measurable, Attainable, Realistic, and Time-Bound.) What are the facilitators and barriers?

Outcome

  1. At the end of the 5-year period, to what extent were the Focus Area SMART objectives achieved?
  2. Over the 5-year period, have states maintained, decreased, or increased their capacity for IVP?
  3. What is the relationship between capacity-building components and states reaching their Focus Area SMART objectives?

Capacity building

Organizational capacity is considered essential for public health agencies to ensure that health promotion and prevention programs are sustained over the extended periods of time necessary to determine effectiveness. Assessing state health department infrastructure is often challenging. Lacking funding to conduct on-site assessments in every funded state, the evaluation team and an advisory group of IVP practitioners and researchers, known as the Evaluation Expert Panel, developed indicators to measure grantees' organizational capacity over time, starting with a baseline assessment in year 1.

Through an iterative process, the evaluation team and the Evaluation Expert Panel identified, reviewed, and discussed a variety of potential indicators in alignment with the proposed criteria. Nine indicators were ultimately selected to measure organizational capacity. The “Capacity Indicator Questionnaire” was developed to measure these indicators via a 27-item Web-based questionnaire.

Using a point system with a maximum of 18 points, grantees were scored as "low" (0 points), "moderate" (1 point), or "high" (2 points) capacity on each of the 9 individual indicators based on their questionnaire responses. Overall capacity was then calculated by summing points across all indicators (low capacity = 0-5 points; moderate capacity = 6-12 points; high capacity = 13-18 points).
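
To make the scoring rule concrete, the following is a minimal sketch (in Python) of how per-indicator ratings roll up into an overall category; the indicator names and example ratings are illustrative and are not the actual Capacity Indicator Questionnaire items.

```python
# Minimal sketch of the capacity scoring described above: each of 9 indicators
# contributes 0 (low), 1 (moderate), or 2 (high) points, for a 0-18 total.
INDICATOR_POINTS = {"low": 0, "moderate": 1, "high": 2}

def overall_capacity(indicator_levels: dict) -> tuple:
    """Sum per-indicator points and map the total to the overall category
    (low = 0-5, moderate = 6-12, high = 13-18)."""
    total = sum(INDICATOR_POINTS[level] for level in indicator_levels.values())
    if total <= 5:
        category = "low"
    elif total <= 12:
        category = "moderate"
    else:
        category = "high"
    return total, category

# Hypothetical grantee rated on 9 illustrative indicators (not the real items).
ratings = {
    "leadership": "high", "funding": "moderate", "staffing": "moderate",
    "data_access": "high", "partnerships": "high", "evaluation": "low",
    "planning": "moderate", "policy": "moderate", "communication": "high",
}
print(overall_capacity(ratings))  # -> (12, 'moderate')
```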

Measuring health impact

All state grantees were required to develop and submit proximal and distal objectives using the SMART format to measure the impact of their prevention efforts. Proximal objectives measured changes in behaviors, attitudes, and knowledge, whereas distal objectives measured changes related to morbidity and mortality outcomes. While this information was collected in a standardized format, states were given flexibility to determine their own objectives so that the selected measures reflected state burden and context. Grantees used a standardized annual progress report template to capture both quantitative and qualitative data related to their objectives. Information collected on the template included the types of strategies being implemented, implementation progress, and metric updates for grantee-defined proximal and distal objectives reported from baseline through year 5 of the project. Because this approach to reporting was unfamiliar to many previously funded states, awardees initially resisted it; through continuous technical assistance (TA) and open dialogue, however, the templated reporting structure became well received by the end of the project period. Collecting data in a standardized way allowed the evaluation team to conduct analyses describing the impact of the program as a whole.
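
As an illustration only, a grantee-defined objective tracked through this template could be represented roughly as below. The field names and example numbers are ours, not the actual template, and the status categories are borrowed from the state-specific reports described later in this article.

```python
from dataclasses import dataclass, field

@dataclass
class SmartObjective:
    """A grantee-defined proximal or distal objective tracked from baseline through year 5.
    Field names and the example below are illustrative, not the actual Core VIPP template."""
    state: str
    focus_area: str
    objective_type: str          # "proximal" or "distal"
    baseline: float
    target: float
    annual_values: dict = field(default_factory=dict)  # {year: reported metric value}

    def status(self, year: int) -> str:
        """Classify annual progress into 'met', 'progress', 'regress', or 'missing data'."""
        value = self.annual_values.get(year)
        if value is None:
            return "missing data"
        moving_down = self.target < self.baseline  # e.g., reducing a death rate
        if (value <= self.target) if moving_down else (value >= self.target):
            return "met"
        improved = (value < self.baseline) if moving_down else (value > self.baseline)
        return "progress" if improved else "regress"

# Hypothetical distal objective: reduce a motor vehicle death rate from 12.0 to 10.0 per 100 000.
obj = SmartObjective("State A", "motor vehicle", "distal", baseline=12.0, target=10.0,
                     annual_values={1: 11.6, 2: 11.9, 3: 12.3})
print([obj.status(y) for y in (1, 2, 3, 4)])  # ['progress', 'progress', 'regress', 'missing data']
```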

Grantees were required to submit success stories annually related to each of the 4 topic focus areas they addressed through the Core VIPP. Grantee success stories both illustrated achievements that occurred as a result of their program and policy efforts and qualitatively demonstrated health impacts in greater detail than the annual progress report template allowed. Success stories helped highlight grantees' key achievements, provided a contextual narrative to support quantitative analyses, and were used by both grantees and the CDC to share the positive impacts of program funding among internal and external stakeholders. Key state program staff members were also invited to participate in key informant interviews annually with the evaluation team. Interviews enabled better understanding of facilitators and barriers of grantees' performance, existing organizational capacity, and perceived ability to achieve health impact. Figure 1 shows the data sources for the evaluation components previously mentioned.

Applying the Evaluation Framework: Enhancing the Core VIPP Evaluation

Application of the Rapid Synthesis and Translation Process

The evaluation team reviewed annual data submitted by grantees through the Capacity Indicator Questionnaire and the annual progress report. Using this review, the team applied a continuous quality improvement process through the lens of systems thinking, as described by Smith and Wilkins7 in this supplement, to determine ways to enhance the implementation and evaluation of Core VIPP. For instance, a review of grantee applications and the first year of data showed that some states were not implementing strategies that were likely to be impactful. While the FOA did not identify specific strategies for implementation, it did indicate that selected strategies should be based upon the Best Available Research Evidence (BARE). This was a challenge for awardees, as some topic areas have more rigorous evidence than others, and states were unable to directly implement different strategies for each of the 4 focus areas due to funding limitations. Instead, in many cases, they provided surveillance data informing program development or supporting evaluation. A review of their strategies had to account for these nuances. The CDC adapted the Rapid Synthesis and Translation Process (RSTP) to synthesize existing research, gray literature, and best practice literature. The RSTP was originally developed as a process for systematically moving knowledge into action.8 The adaptation of the RSTP emphasized inclusion of both research and practice perspectives. It enabled the CDC to provide specific feedback to grantees regarding how well their strategies aligned with BARE and whether they could be successfully implemented and expected to achieve the outcomes of interest. In partnership with CDC subject matter experts, a comprehensive registry of programmatic and policy strategies based on BARE was developed. State-proposed strategies were reviewed for alignment with the evidence base, and feedback was provided to grantees. The BARE analysis of grantee annual progress reports placed strategies into 1 of 5 categories: (1) BARE strategy; (2) supportive of BARE; (3) building capacity for BARE; (4) none; and (5) not enough information (Table 2). This analysis was conducted annually to identify where TA could be provided around strategy development and implementation and to track the movement of CDC-funded state activities into alignment with the current evidence base.

TABLE 2

Early in the BARE analysis of year 1 annual progress reports, it became evident that many grantees were not providing enough detail about their strategies and/or activities to determine the evidence base. In the Table 2 example, without information on what was meant by educating policy makers, it was impossible to determine whether the activity was well informed by the evidence base. If the education was related to home visitation, for example, the activity would be coded as supportive of BARE. If the education was simply delivering a fact sheet related to child abuse and neglect, it would be coded as "none," and TA would be delivered to shift toward activities that the evidence supports as having the desired impact on health outcomes. This finding focused TA with grantees after year 1, leading to marked improvement in the quality of year 2 annual progress report data.
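
To show how such annual coding results can be summarized to target TA and to track movement toward the evidence base, here is a minimal sketch; the coded records are invented, and in practice the categories were assigned by reviewers, not by code.

```python
from collections import Counter

# Hypothetical coded strategies: (state, year, BARE category assigned by reviewers).
# The categories come from Table 2; the records themselves are invented for illustration.
coded_strategies = [
    ("State A", 1, "none"), ("State A", 1, "supportive of BARE"),
    ("State A", 2, "BARE strategy"), ("State B", 1, "not enough information"),
    ("State B", 2, "building capacity for BARE"), ("State B", 2, "BARE strategy"),
]

def category_counts_by_year(records):
    """Tally how many coded strategies fall into each BARE category per year—the kind
    of summary used to spot TA needs and year-to-year movement toward the evidence base."""
    counts = {}
    for _state, year, category in records:
        counts.setdefault(year, Counter())[category] += 1
    return counts

for year, counts in sorted(category_counts_by_year(coded_strategies).items()):
    print(year, dict(counts))
# 1 {'none': 1, 'supportive of BARE': 1, 'not enough information': 1}
# 2 {'BARE strategy': 2, 'building capacity for BARE': 1}
```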

Tracking progress and technical assistance

Annual reporting is vital to identifying and communicating the success of funded programs, but a great deal of context is missed if that is the only way of capturing awardee information. Improvements to tracking and monitoring processes for TA were therefore included in the evaluation approach. For instance, project officers held monthly calls with all funded states, during which a great deal of information was discussed, but no uniform structure was in place to capture and query the information necessary to inform ongoing, collective evaluation and TA efforts across states. To address this issue, a system known as the Monitoring and Evaluation Tool was developed, allowing state project officers to enter and query awardee information. This information was vital in providing real-time updates between reporting periods and in monitoring the type and quantity of TA requested by states. The Monitoring and Evaluation Tool helped the evaluation team identify common issues across grantees and allowed for the development of proactive group TA, reducing project officer burden in addressing each request individually. Based on early evaluation findings, monthly state calls were modified to include both a CDC project officer and a CDC evaluation officer, making evaluation a consistent area of discussion during monthly grantee calls and leading to enhanced state evaluation efforts.
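
The article does not describe the Monitoring and Evaluation Tool's internal design, so the following is only a rough sketch of the enter-and-query pattern it supports; the record fields and topics are hypothetical.

```python
from dataclasses import dataclass
from collections import Counter

@dataclass
class TARequest:
    """Hypothetical record a project officer might enter after a monthly state call;
    the fields are illustrative, not the Monitoring and Evaluation Tool's actual schema."""
    state: str
    month: str
    topic: str      # e.g., "evaluation design", "data access", "policy strategy"
    notes: str = ""

class MonitoringLog:
    def __init__(self):
        self.requests: list[TARequest] = []

    def add(self, request: TARequest) -> None:
        self.requests.append(request)

    def common_topics(self, n: int = 3):
        """Query the most frequent TA topics across grantees—the kind of summary
        used to plan proactive group TA instead of one-off responses."""
        return Counter(r.topic for r in self.requests).most_common(n)

log = MonitoringLog()
log.add(TARequest("State A", "2013-04", "evaluation design"))
log.add(TARequest("State B", "2013-04", "evaluation design"))
log.add(TARequest("State C", "2013-05", "data access"))
print(log.common_topics())  # [('evaluation design', 2), ('data access', 1)]
```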

Evaluation institute and evaluation community of practice

Grantee self-reported data from the Capacity Indicator Questionnaire indicated a need for more TA related to using and disseminating evaluation findings associated with interventions. In response, the evaluation team created and implemented the Injury and Violence Prevention Program & Policy Evaluation Institute ("Evaluation Institute") from 2014 to 2016. Jointly hosted by the Safe States Alliance and the American Public Health Association (APHA) and funded by the CDC's Injury Center, the Evaluation Institute was a 4-month training initiative designed to help state public health department program staff and their partners strengthen skills for high-quality evaluations of IVP interventions. With support from coaches and evaluation advisors, participating Evaluation Institute teams (comprising a team leader from the state health department and 3-4 additional team members from other agencies and/or disciplines) developed and began implementing an IVP policy or program evaluation plan. From 2014 to 2016, a total of 102 state and local practitioners, comprising 23 teams from 19 states, participated in the Evaluation Institute and addressed injury- and violence-related topics across the field. As a result of Evaluation Institute participation, more than 90% of participants enhanced their ability to conduct high-quality evaluations and nearly all participants increased their knowledge and application of evaluation-related skills.9 Interviews with participants from 14 of the 2015 Evaluation Institute teams attributed team success in starting or completing evaluations to 6 key strengths of the institute: (1) protected planning time with team members; (2) partnerships with key stakeholders; (3) networking opportunities with fellow states; (4) an evaluation plan template; (5) rigorous methods; and (6) TA.

In addition to the Evaluation Institute, the Safe States Alliance and the APHA collaborated to develop an online evaluation community of practice to support ongoing discourse, collaboration, skill building, and exchange of evaluation-related knowledge between IVP partners and practitioners. The community of practice included forums where participants could post announcements, tools and resources, funding and training opportunities, and other news and information. As of September 2016, there were nearly 170 members in the evaluation community of practice and more than 50 tools/resources had been shared to support IVP practitioners.

Communicating and disseminating results

The evaluation team developed annual state-specific evaluation reports that summarized individual evaluation findings for each grantee. These reports were designed to meet grantees' desire for timely and consistent feedback for continuous program improvement. To inform grantees of their progress toward meeting program goals, each report presented findings from the 3 conceptual elements of the evaluation:

  • Performance measures: Reports conveyed the percentage of grantee activities completed each year by goal area, as reflected in their annual progress reports.
  • Organizational capacity: Reports conveyed a summary of the 9 capacity indicators, as measured by the Capacity Indicator Questionnaire and the BARE analysis. Grantees were provided an overall capacity categorization as well as a "low," "moderate," or "high" categorization for each individual indicator.
  • Focus area health impact: Reports conveyed grantee progress toward achieving SMART objectives. Objectives were categorized as “met,” “progress,” “regress,” or “missing data.”

Reports were disseminated to states accompanied by a tailored TA call to review findings and gain insights from each state. In addition, aggregate evaluation reports were developed both after the first 2 years of funding and in the last year of the 5-year cooperative agreement. These aggregate reports attempted to communicate the success of the program as a whole. To effectively communicate program successes to internal and external stakeholders, the CDC used data visualization software to illustrate impact beyond the state-identified proximal and distal objectives. For example, after reviewing narrative reports for key themes, it was noted that many states spent significant time educating legislators. By examining state policies, we were able to visually show an increase, over each funding year, in the number of organizational, regulatory, and legislative policies that moved into alignment with the evidence base. Evidence-based policies have been shown to have the greatest impact on reducing negative outcomes of interest, including prevention of injury and death. Stakeholders appreciated both the quantitative measures of program impact and the qualitative success stories.

Results

The impact of Core VIPP funding in the 3 program goal areas was clear following completion of the evaluation (Table 3).

TABLE 3

Regarding capacity, at the beginning of the project, one state's overall score was in the low-capacity category and 8 states were in the high-capacity category; by year 4, no states were at low capacity and 15 were at high capacity (Figure 2). While this improvement in capacity was certainly deemed a success, evaluators were unable to determine what, if any, relationship existed between state capacity level and achievement of objectives because of the small number of funded states, the limited range in state capacity, and occasionally incomplete data. Given that states that successfully competed for funding were likely at a higher capacity level to begin with, it is a challenge to disentangle the relationships among funding, capacity, and achievement of objectives.

FIGURE 2

From a health impact perspective, program success was demonstrated through the achievement of proximal and distal objectives. Proximal achievement rose steadily each year, whereas distal achievement fluctuated yearly but stayed fairly steady across the 5-year funding period (see Supplemental Digital Content, available at http://links.lww.com/JPHMP/A378). While this approach helped showcase the program's success, it still did not allow us to fully identify the program's impact on reducing morbidity and mortality through behavior change.

As previously mentioned, the FOA did not identify specific strategies for implementation but did indicate that state-selected strategies should be based upon BARE. Implementation of evidence-based strategies improved across the 5-year funding cycle (Figure 3), largely due to TA and training provided by CDC project officers, evaluators, and the Safe States Alliance.

FIGURE 3

Summarized qualitative findings demonstrate that grantees identified funding as essential in helping their agencies enhance IVP efforts in the program's first 4 years. Some grantees described how Core VIPP not only helped IVP programs maintain current funding and external partners but also positioned them to successfully apply for funding from other sources.10 At the other end of the spectrum, several grantees emphasized that many states are dependent on Core VIPP funding, which is often inextricably linked to the existence of their IVP programs.

Discussion (Lessons Learned)

While the theoretical frameworks and approaches discussed earlier served the needs of the Core VIPP well, they were not without room for improvement. In this section, lessons learned and planned enhancements for future program evaluations are discussed.

Measuring capacity

Despite the thoughtful and rigorous process used to develop the Capacity Indicator Questionnaire indicators, interpreting the findings was a challenge due to the inconclusive nature of the results. Ascertaining exactly why capacity levels increased or decreased was at times difficult. The questions were subjective by nature and open to individual interpretation; because of staff turnover and changing roles, the survey was frequently completed by different individuals from year to year, making it difficult to determine whether score changes reflected actual shifts in capacity or merely a change in question interpretation. In addition, because of competing deadlines and priorities, not all states completed the entire survey each year, resulting in a move away from capacity measurement via online questionnaire for evaluation purposes.

Ultimately, evaluators learned that "organizational capacity" can be difficult to operationalize and measure, with potentially multiple definitions for each specific organizational capacity "indicator." For example, funding capacity was defined by the degree to which grantees had "diversified" funding sources (ie, funding from federal, state, or other sources). However, other potential definitions that were discussed included the number of funding sources the grantee had or the length of time the grantee had been receiving funding from each source. This lack of clarity confirmed that organizational capacity is a complex concept that can be defined differently in various contexts. As a result, it was emphasized throughout the 5-year evaluation that the capacity indicators were developed specifically for this grant and may not be transferable to the IVP field as a whole. Given that this was one of the first efforts to broadly define and measure specific elements of state IVP program capacity, the intent was not to assign standard indicators but rather for these indicators to serve as a springboard for further efforts to more precisely define and measure organizational capacity in state IVP programs and other public health institutions. A key lesson learned was that measuring capacity through a quantitative survey alone can be difficult and often inconclusive. If conducting a survey, it is vital to collect additional contextual information to fully understand state health department capacity. These combined approaches provided a more comprehensive and valid picture of state health department capacity by the end of the funding period.

Demonstrating impact

A major challenge that arose near the end of the 5-year project period was describing the aggregate impact of Core VIPP when states were implementing many different strategies and using many different data sources to monitor progress. To better communicate the success of the program, an analysis was conducted to measure the progress states made in moving from baseline toward their identified goals. This illustrated the success of the program as a whole through the percentage of objectives that were met annually across all states. The results of this analysis were communicated in aggregate by proximal and distal objectives, as well as broken out by the 5 priority areas of the Injury Center being addressed (see Supplemental Digital Content, available at http://links.lww.com/JPHMP/A378).
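
The aggregation described here can be sketched as follows, assuming a long-format table of objective statuses. The column names, statuses, and records are illustrative rather than the actual analysis dataset, and pandas is used only as one convenient option.

```python
import pandas as pd

# Hypothetical long-format records of objective status by state, year, and type;
# statuses mirror the report categories ("met", "progress", "regress", "missing data").
records = pd.DataFrame(
    [
        ("State A", 1, "proximal", "progress"), ("State A", 1, "distal", "missing data"),
        ("State A", 2, "proximal", "met"),      ("State A", 2, "distal", "progress"),
        ("State B", 1, "proximal", "met"),      ("State B", 2, "proximal", "met"),
        ("State B", 2, "distal", "regress"),
    ],
    columns=["state", "year", "objective_type", "status"],
)

# Percentage of objectives met annually across all states, broken out by proximal vs distal.
pct_met = (
    records.assign(met=records["status"].eq("met"))
    .groupby(["year", "objective_type"])["met"]
    .mean()
    .mul(100)
    .round(1)
)
print(pct_met)
```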

Although state-defined objectives allowed awardees to select flexible measures applicable to their context, the large variability across states presented a challenge in aggregating up to describe program impact in totality. While data were collected in a quantitative, standardized manner, the objectives themselves were still not standard across states. The key lesson learned was that in addition to standardized data collection, a set of predefined indicators needs to be developed and required of all awardees during annual reporting. This provides the ability to aggregate data across states and report on specific measures, rather than on general progress made across varying measures. This approach was implemented in the next iteration of this 5-year funding. Utilizing a mixed-methods approach is essential to support quantitative data with narrative stories that bring the data to life, and this approach proved effective in demonstrating impact to stakeholders. In addition, when discussing impact, it was often important to frame results as the program's contribution to overall efforts in a state rather than as impact attributable to Core VIPP funding specifically. Many states were partnering with other organizations on implementation efforts to leverage resources and funds, achieving a greater impact than any single organization would have had on its own.

Communicating results

As previously discussed, awardee data were disseminated in a variety of ways to key internal and external stakeholders. It is imperative to discuss stakeholder reception of these approaches and the challenges that arose. Internally, evaluation findings were shared through quarterly program review and annual budget review sessions with center and agency leadership. While this was necessary for a structured review, we also needed a way for leadership and staff to quickly access state data and pull specific information as needed to answer ad hoc requests. To address this, topic-specific 1-pagers were developed and shared with targeted audiences as appropriate. In addition, evaluation data were shared with all internal stakeholders through the development of Tableau dashboards. These dashboards allowed users to interact with data to formulate their own conclusions and conduct exploratory analysis, providing much greater insight than static reports. As discussed earlier, communicating aggregate quantitative evaluation results was challenging; however, a great deal of qualitative evaluation data were collected through grantee interaction and reporting. It became apparent early on that successfully communicating about the program required us to provide indications of movement via quantitative data supported by individual qualitative state examples. The collection of state success stories played a critical role in framing program successes; however, this process also yielded lessons learned. We moved to a templated structure to allow grantees to submit their most relevant information and provide descriptive narratives. These narratives helped bring the quantitative data to life by telling a story with which stakeholders related. This approach allowed grantees to increase visibility and enhance program efforts through successful communication of accomplishments.

Evaluation results were communicated back to grantees through aggregate evaluation reports, individual state evaluation reports, and monthly project officer calls. These reports were intended to assist in state programmatic improvement as well as to promote peer-to-peer collaboration across states. It was found, however, that individual reports that simply reflected back previously submitted state data were often not beneficial to a state. In addition, aggregate evaluation reports of more than 50 pages were deemed not useful and were often not read by stakeholders. Reports were frequently not finalized and disseminated to states until 9 months after the end of the previous reporting period. As a result, shorter, timelier TA reports were developed and delivered to states within 2 months of submission of annual reports. In addition, state versions of the interactive dashboards previously mentioned were shared back with state partners when possible.

The key lesson learned around communication and dissemination is that timely, easily digestible feedback that moves beyond static reports and presentations to more innovative communication approaches such as interactive dashboards and data visualization resonated with stakeholders and improved uptake of evaluation results. Pairing quantitative results with narrative success stories was most beneficial in bringing the data to life.

Peer learning

The Web-based evaluation community of practice, although extremely valuable for many stakeholders, also revealed several lessons learned:

  • State IVP practitioners and their partners can benefit from an online platform that allows free sharing of information and resources on an ongoing basis with other peers across various states.
  • Active and regular use of an online platform does not often happen organically. Participants must regularly be prompted and encouraged to participate when possible by administrators and key community of practice members who are willing to start and moderate conversations with other users.
  • For a Web-based evaluation community of practice to be fully beneficial to members, there must be ongoing evaluation activities in which community of practice members are engaged. Ongoing evaluation activities require external environments (ie, funding, agency support, etc) that continuously encourage the evaluation of program and policy interventions. By immersing public health professionals in this evaluation-encouraging culture, members of communities of practice will be motivated to regularly seek out information, resources, and trainings to enhance evaluation knowledge and skills.

Research to practice

At the CDC Injury Center, we have the advantage of housing both extramural research funding and state health department funding within the same division. This presents a major opportunity for promoting research to practice; however, actualizing this opportunity was more challenging than anticipated. States such as North Carolina, where the state health department received Core VIPP funding and the University of North Carolina received CDC Injury Control Research Center (ICRC) funding, were able to establish extremely fruitful and mutually beneficial relationships. In funded states without close proximity to an ICRC, establishing these relationships seemed more challenging. To address this issue, future iterations of Core funding established Regional Network Coordinating Organizations (RNCOs). This approach is intended to better facilitate state interaction with ICRCs within their region on various topic areas and across the nation on specialized topic areas.11

Conclusions

Our state partners have consistently reported that Core funding is a key factor in successfully competing for other federal and nongovernmental organization funding opportunities. Core states reported leveraging millions of dollars to support the goals and objectives of the Core VIPP. In addition to bringing additional funding to the states, the Core program has also been instrumental in leveraging all types of resources, including in-kind staff support, media advertising, and program supplies (eg, car seats, bike helmets). This type of funding is also vital in enabling states to quickly respond to emerging health threats. As opioid drug overdose quickly became a national issue, Core-funded states were well positioned to respond. This was evident as 19 of 20 previously funded Core states successfully competed for CDC Prescription Drug Overdose funding through the Prevention for States and Data Driven Prevention Initiative funding opportunities.

Implications for Policy & Practice

  • When conducting surveys intended to measure organizational capacity, it is vital to collect contextual information to fully understand survey results. These combined approaches will provide a more comprehensive and valid picture of organizational capacity.
  • When intending to aggregate quantitative data across funded entities, predefined indicators should be developed, and all awardees should be required to report on them during annual reporting.
  • When communicating evaluation results, timely, easily digestible feedback is necessary for stakeholder uptake and use. As stakeholders become more technologically savvy, moving beyond static reports and presentations to more innovative communication approaches such as interactive dashboards is increasingly important for improved uptake of results. Pairing quantitative results with narrative success stories is also beneficial in bringing the data to life.
  • Strong, active leadership is vital to success of peer-to-peer communities of practice. Without strong leadership and engaging content, communities of practice will not thrive on membership input alone.

The Core VIPP funding ended in July 2016, with a new Core SVIPP funding program beginning in August 2016. With the development of a new funding proposal, the CDC used the lessons learned previously to inform improvements to the program over the next 5 years.

While these lessons learned resulted in direct changes to future program iterations, these concepts are also applicable to a broader range of behavioral health programs. These findings can help advance the broader evaluation field by informing the development of future, large-scale evaluations. Preidentifying measures to collect across all awardees that resonate with stakeholders can be vital in communicating the success of a program. Implementing a systems-level approach to addressing an issue can lead to greater success than any one program alone; this is especially important in a fiscally competitive environment with multiple public health priorities. In addition, while planning and capacity building are vital to success, it is important for funding and steering groups to move from planning to implementation. This allows awardee activities to be connected more directly to the implementation, and the health impact, of the systems they support. Coordinating organizations can also learn from the RNCO structure and how it was modified to be a more deliberate mechanism to support a community of practice with resources that reduce the individual burden on the members themselves. RNCOs demonstrate that allowing coordinating organizations to have either a regional focus on peer-to-peer collaboration or a national focus on specific topics and initiatives (eg, research-to-practice and practice-to-research) can enhance group utility and participation. Finally, requiring FOA applicants to produce an evaluation plan, with a requisite revision to that plan during the first year postaward, builds a more deliberate process for continuous quality improvement from the very first planning stage. All of these changes combined can lead to greater success and understanding of the impact of a program.

References

1. Centers for Disease Control and Prevention, National Center for Injury Prevention and Control. Injury prevention & control. http://www.cdc.gov/injury. Published 2017. Accessed June 22, 2017.
2. Florence C, Simon T, Haegerich T, Luo F, Zhou C. Estimated lifetime medical and work-loss costs of fatal injuries—United States, 2013. MMWR Morb Mortal Wkly Rep. 2015;64(38):1074–1077. https://www.cdc.gov/mmwr/preview/mmwrhtml/mm6438a4.htm?s_cid=mm6438a4_w. Accessed June 22, 2017.
3. Frieden T. Six components necessary for effective public health program implementation. Am J Public Health. 2014;104(1):17–22.
4. Safe States Alliance. State of the States Reports. Atlanta, GA: Safe States Alliance. http://www.safestates.org/?page=SOTS. Accessed June 22, 2017.
5. Centers for Disease Control and Prevention, National Center for Injury Prevention and Control. Core State Violence and Injury Prevention Program. http://www.cdc.gov/injury/stateprograms. Published 2017. Accessed July 11, 2017.
6. Centers for Disease Control and Prevention, Program Performance and Evaluation Office. A framework for program evaluation. http://www.cdc.gov/eval/framework. Published November 17, 2016. Accessed July 11, 2017.
7. Smith S, Wilkins N. Mind the gap: approaches to addressing the research-to-practice, practice-to-research chasm. J Public Health Manag Pract. 2018;24(Suppl 1):S6–S11.
8. Thigpen SA, Puddy RW, Singer HH, Hall DM. Moving knowledge into action: developing the rapid synthesis and translation process within the interactive systems framework. Am J Community Psychol. 2012;50(3/4):285–294.
9. Centers for Disease Control and Prevention, National Center for Injury Prevention and Control, and Safe States Alliance. Evaluation Institute outcome impact evaluation report. http://www.safestates.org/. Published March 2017. Accessed April 5, 2017.
10. Deokar A, Dellapenna A, Defiore-Hymer J, Laidler M, Millet SM, Myers L. State injury programs' response to the opioid epidemic: the role of CDC's core violence and injury prevention program. J Public Health Manag Pract. 2018;24(Suppl 1):S23–S31.
11. Smith S, Wilkins N, Marshall SW, et al. The power of academic-practitioner collaboration to enhance science and practice integration: injury and violence prevention case studies. J Public Health Manag Pract. doi: 10.1097/PHH.0000000000000675.
Keywords:

government evaluation; mixed-methods evaluation; program evaluation; program implementation; public health; violence and injury prevention

Copyright © 2018 Wolters Kluwer Health, Inc. All rights reserved.