To understand how individual GRHOP projects influenced the joint medium- and long-term outcomes, project-specific “zoom-in models” were developed. Each zoom-in model included the same mid- and long-term outcomes as the enterprise logic model, though short-term outcomes were project-specific and more detailed than those in the enterprise model. Project contributions to mid- and long-term outcomes were highlighted in each model to illustrate that project's pathway from activities to long-term outcomes. All zoom-in models underwent multiple rounds of review to account for available data. The MBHCP zoom-in was the most challenging to develop because it reflected the work of independent partners in the 4 states, which varied substantially in activities, maintained discrete budgets, and had leaders representing different mental and behavioral health disciplines.31,32 Thus, the MBHCP initiative operated as 4 state-specific projects nested within a larger MBHCP enterprise. Each of the 4 GRHOP projects also developed output and process measures for its respective short-, mid-, and long-term outcomes. Once the zoom-in models were created, the evaluation subcommittee compiled measures for each long-term outcome based on data collection across projects.

The collective creation of a shared logic model required extensive collaboration among partners; as a result, partners gained a full perspective on the varying project contexts and objectives as well as the overlapping project sites and stakeholders. This process deepened the sense of shared mission among project leaders and fostered additional collective activities to enhance health care capacity across GRHOP sites.31
Individual data collection: Examples from GRHOP's enterprise evaluation
Table 2 highlights examples of several projects' individual evaluation efforts, drawing from PCCP, EHCLP, and MBHCP in Louisiana, Mississippi, and Alabama. The data presented in this table do not capture the full scope of data collected by each of the projects. Rather, these examples illustrate a small portion of the qualitative and quantitative data sources that contribute to 2 of the midterm outcomes and the same long-term outcome.
Because data collection for each project was completed independently, there was the potential for repetition in the data collected or for separate data collection efforts to be underway at the same site(s). In addition, distinct approaches to data collection could impede GRHOP's ability to compare data across projects. However, independent data collection provided benefits that outweighed these challenges. It allowed for enhanced intraproject coordination of resources and scheduling, which was particularly important given the complexity and geographic dispersal of partners and sites. Individual projects were most familiar with the appropriate stakeholders to involve in evaluation efforts and had the expertise to select or create data collection tools. Furthermore, individual projects measured outcomes according to each project's zoom-in model, ensuring that data collection was still informed by a collaborative effort to measure GRHOP's impact.
Collective analysis: Utilizing enterprise evaluation data to measure GRHOP's collective impact
After individual data collection is completed, the next step in the enterprise evaluation process is to examine how data collected by individual projects contribute to collective impact. While fully examining the collective impact of GRHOP is outside the scope of this article, we can provide some examples of GRHOP's progress toward achieving collective objectives. As highlighted in Table 2, multiple projects contributed to the same midterm outcomes; PCCP and EHCLP data both relate to a stronger health care system, and the data from MBHCP partners in Louisiana, Alabama, and Mississippi relate to how MBH services and/or referral systems were embedded into primary care settings. By contributing to common midterm outcomes, these individual evaluation efforts provide a multidimensional understanding of GRHOP's impact.
Individual data collection efforts undertaken by PCCP and EHCLP were complementary. The EHCLP's findings that clients have better health care access today than 5 years ago are corroborated by PCCP's data regarding the increase in medical visits (and thus services provided) and unique patients seen. Whereas PCCP's data capture the higher number of visits and patients, EHCLP's data pertain to health care providers' perceptions of improved health care access and the reasons behind improved access. In this sense, EHCLP's data provide a nuanced understanding of how the health care system has been strengthened (eg, increased availability of insurance, increased number and capacity of federally qualified health centers in the area). In turn, PCCP's data describe changes occurring within the health care system at a larger scale than EHCLP's data, which were collected from a subset of the health care workforce (community health workers and community health worker supervisors). Accordingly, the type of data collected by PCCP complements EHCLP's data by providing insights into the overall health care landscape. Finally, PCCP and EHCLP evaluated different dimensions of a strengthened health care system. While PCCP's data provide quantitative evidence of a strengthened health care system in terms of health care utilization, EHCLP's data provide qualitative evidence of the same outcome in terms of health care access. By examining the health care system from both provider (ie, EHCLP) and clinic operator (ie, PCCP) perspectives, GRHOP partners gain a holistic and robust view of the health care system and the ways it changed. In accordance with collective impact approaches to evaluation,20 our collective analysis does not assert a causal linkage but rather shows potential contributions of these individual projects to strengthened health care systems.
Similarly, the data collected by MBHCP highlight indicators of embedded and integrated mental health care in primary care clinics across states and illustrate how those indicators were measured in project-specific ways. All MBHCP projects collected data about the number of clinics offering MBH services and the number of integrated MBH providers placed within those clinics before and after project implementation. As illustrated in Table 2, MBHCP in Alabama and Mississippi also measured the number of patients referred to MBH services through a warm handoff from the primary care provider to the MBH provider while the patient was in the clinic. This referral pathway is a pivotal process in integrated care since it reduces no-show rates, which enhances sustainable billing; improves access to specialty care; may combat mental health stigma; and may reflect the extent to which MBH providers are seen as important contributors to patient wellness and as peers on the health care team.33 The MBHCP data collected in Louisiana highlight the organizational components that contribute to fully embedded services, drawing on an assessment of changes in perceptions among clinic administrators, MBH providers, and coordinators regarding patient-centered care, population-based care, measurement-based targeted treatment, evidence-based care, and accountability. These organizational aspects are aligned with national models for fully integrated care. While the MBHCP projects utilized individualized data collection systems to evaluate project impact, these examples, similar to those from PCCP and EHCLP, illustrate how the enterprise evaluation model has the potential to elucidate not only individual project outcomes but also amplified collective impact. All tell the story of a stronger health care system and increases in embedded mental and behavioral health care in clinics primarily serving vulnerable patients.
Further reflection on collective impact will be the focus of a program-wide workshop to be held when all project activities wrap up. The workshop will facilitate collective analysis of individual data collection, focusing on key findings from individual projects and assessing the extent to which GRHOP resulted in the intended collective midterm outcomes. While it is unlikely that long-term outcomes will be fully achieved in GRHOP's time frame of 6 years, the workshop will provide an opportunity to assess progress toward accomplishing long-term outcomes, as well as the potential barriers or enablers influencing achievement of long-term outcomes after the program ends.
From Collective Creation to Collective Analysis: Insights for Practitioners
Enterprise evaluation is a practical approach to measure collective outcomes across multiple projects. Through the 3 stages of collective creation, individual collection, and collective analysis, enterprise evaluation provides an opportunity to assess the impact of jointly funded projects in a holistic and multiscalar manner that draws on both collective assessment and individual programmatic approaches to evaluation. In GRHOP, the collective creation of the enterprise logic model focused the partners on the collective outcomes and clearly articulated the ways each project feeds into the larger whole. Individual data collection maximized efficiency by allowing projects to collect data independently. Since individual data collection was informed by the enterprise logic model, this approach also ensured that the data collected related to collective outcomes. Individual data collection was also essential, given the broad scope of project objectives and the diversity within and across states and clinic systems. Early analysis of individual project data indicates areas of progress toward collective outcomes.

GRHOP's experience with enterprise evaluation yielded several lessons learned and insights for other projects considering a similar approach. One is the need to streamline evaluation efforts with existing data systems. To limit the burden on clinic and community partners, some projects aligned data collection efforts with partners' existing data systems. For example, PCCP made significant changes to its data collection approach to reduce the burden on participating agencies, shifting away from a unique survey and instead relying on the existing Uniform Data System (UDS) measures that the Health Resources and Services Administration already required clinic systems to collect and report.
While this approach was favorable for institutional partnerships, it involved a trade-off: PCCP could no longer access data disaggregated by clinic site because UDS measures are reported at the operator level. Similarly, MBHCP evaluation data collected by project staff varied substantially from data extracted from electronic health records, given differences in technological capacity at clinics, resulting in inconsistencies in the data available across sites.
As in most jointly funded public health projects, GRHOP's collective evaluation efforts were not mandated, and central coordination of evaluation was not funded. The formation of an evaluation subcommittee to oversee evaluation efforts, however, provided accountability and ensured timeliness in individual data collection. Subcommittee meetings allowed partners to consistently share information about evaluation strategies and seek ways of maximizing programming to meet clinic and community needs. Nevertheless, we recommend that future groups mandate and fund enterprise evaluation efforts so that team members have the resources to take a more proactive role in coordinating individual data collection, including developing a centralized action plan covering each project's data collection, evaluation deadlines, and milestones, and identifying a team member with evaluation expertise who would be accountable for each evaluation component. Funding specifically for evaluation staffing and coordination should also be included in project budgets. Similarly, it would have been beneficial to have more dedicated time for data collectors to interact and more frequent meetings of the evaluation subcommittee. These interactions could have allowed team members to provide more frequent updates, resolve data collection challenges, and collectively reflect on progress and outcomes throughout implementation. We also recommend creating the enterprise logic model earlier in the process. In GRHOP's case, the model was not finalized until after project implementation and project-specific evaluation efforts began, due to the lack of a mandate for evaluation and the urgency for projects to deliver services as quickly as possible. However, the built-in flexibility of the enterprise evaluation approach also allows for its potential application to interrelated projects that have already carried out independent evaluations.
For example, projects with similar goals, dedicated team members, and available resources for evaluation could retrospectively develop a collective logic model and draw from their existing evaluation data to engage in collective analysis.
GRHOP's experience with the enterprise evaluation approach represents a fundamental shift in evaluation and academic collaboration, as it actively encouraged project partners to deepen their awareness of and collaboration with other projects to maximize impact. In this regard, shared values and strong leadership were key enabling factors for the program's success with the enterprise evaluation approach. Since evaluation was not part of the program's official mandate and partners were responsible for funding their own evaluation efforts, the projects were limited in the time and resources available for evaluation. It was thus critical that partners personally prioritized assessing collective impact and conducting rigorous evaluation, particularly given the complexity of the program's design and the challenges it posed for collective evaluation. As previously mentioned, GRHOP projects involved broad objectives, disciplines, and target audiences. There was a large number and variety of institutional partners, and each project had a distinct pace and timing of implementation. The cumulative impact of projects also differed by site and state, since projects overlapped at some sites, geographic areas, and communities but not at others. Funding levels also varied by state and project. Considering this complexity, the collective creation phase of the enterprise evaluation approach significantly benefited from facilitation by an external consultant who guided the development of the enterprise logic model. In addition, several “champions” for the enterprise evaluation approach among the partners spearheaded the framework's implementation. Several project leaders also had considerable assessment and evaluation expertise, as well as strong preexisting ties to communities. Leveraging existing expertise was highly valuable in facilitating the process.
The enterprise evaluation approach addresses a long-standing policy gap in public health by incorporating cross-sector coordination and collective impact measures into evaluation. The built-in flexibility of the framework allowed GRHOP's individual projects to tailor data collection to context, objectives, and indicators of interest, while also examining the bigger picture of how projects interact and influence collective outcomes. Implementation of the framework takes intentional effort and collaboration, but ultimately it promotes the type of evaluation needed to assess GRHOP's sustainable long-term impact across the Gulf Coast. For funding agencies, enterprise evaluation presents an important opportunity to move evaluation beyond descriptive measures aimed at accountability and gain meaningful insights into the collective impact of interrelated public health initiatives, while still considering individual program impacts. Broader implementation of the framework, however, would require funding agencies to shift their orientation to evaluation. This implies changing the way requests for proposals are written and ultimately altering the very way public health research and capacity building are conceptualized and implemented.
Implications for Policy & Practice
- Enterprise evaluation presents an opportunity for public health agencies to shift their evaluation approach to gain better insight into the collective impact of funded projects.
- Enterprise evaluation seeks to maximize collective impact by fostering awareness and collaboration among interrelated projects.
- The enterprise evaluation framework counters standard public health policies and practices by deliberately incorporating cross-project coordination and collective impact measures.
- Measuring collective impact across projects requires intentional effort and strong partnerships among practitioners and institutions.
1. Levinson DR. Nursing Home Emergency Preparedness and Response During Recent Hurricanes. Washington, DC: Department of Health & Human Services; 2006.
2. Lane K, Charles-Guzman K, Wheeler K, Abid Z, Graber N, Matte T. Health effects of coastal storms and flooding in urban areas: a review and vulnerability assessment. J Environ Public Health. 2013;2013:13.
3. Kessler RC, Galea S, Gruber MJ, Sampson NA, Ursano RJ, Wessely S. Trends in mental illness and suicidality after Hurricane Katrina. Mol Psychiatry. 2008;13:374.
4. Kim SC, Plumb R, Gredig QN, Rankin L, Taylor B. Medium–term post–Katrina health sequelae among New Orleans residents: predictors of poor mental and physical health. J Clin Nurs. 2008;17(17):2335–2342.
5. Jhung MA, Shehab N, Rohr-Allegrini C, et al. Chronic disease and disasters: medication demands of Hurricane Katrina evacuees. Am J Prev Med. 2007;33(3):207–210.
6. Rudowitz R, Rowland D, Shartzer A. Health care in New Orleans before and after Hurricane Katrina. Health Aff. 2006;25(5):w393–w406.
7. Osofsky HJ, Osofsky JD, Kronenberg M, Brennan A, Hansel TC. Posttraumatic stress symptoms in children after Hurricane Katrina: predicting the need for mental health services. Am J Orthopsychiatry. 2009;79(2):212–220.
8. Juneja P. Economics of Disaster: New Orleans and Katrina. Atlanta, GA: Federal Reserve Bank of Atlanta Public Affairs Department; 2015.
9. Arabella Advisors. Evaluating Post-Hurricane Katrina Investments: Strengthening Decision-Making and Organizational Impact. Chicago, IL: John D. and Catherine T. MacArthur Foundation; 2012.
10. Seidman KF. Coming Home to New Orleans: Neighborhood Rebuilding After Katrina. New York, NY: Oxford University Press; 2013.
11. Goldstein BD, Osofsky HJ, Lichtveld MY. The Gulf oil spill. N Engl J Med. 2011;364(14):1334–1348.
12. Lichtveld M, Sherchan S, Gam KB, et al. The Deepwater Horizon oil spill through the lens of human health and the ecosystem. Curr Environ Health Rep. 2016;3(4):370–378.
14. Resources and Ecosystems Sustainability, Tourist Opportunities, and Revived Economies of the Gulf Coast States Act (RESTORE Act), 33 USC §1321(t), Pub L No. 112-141; 2012.
16. Lichtveld M. Disasters through the lens of disparities: elevate community resilience as an essential public health service. Am J Public Health. 2017;108(1):28–30.
17. Beitsch LM, Goldstein BD, Buckner AV. Instigating public health set-asides: Deepwater Horizon as a model. J Public Health Manag Pract. 2017;23:S5–S7.
18. Juarez P, Matthews-Juarez P, Hood D, et al. The public health exposome: a population-based, exposure science approach to health disparities research. Int J Environ Res Public Health. 2014;11(12):12866.
19. Flood J, Minkler M, Hennessey Lavery S, Estrada J, Falbe J. The collective impact model and its potential for health promotion: overview and case study of a healthy retail initiative in San Francisco. Health Educ Behav. 2015;42(5):654–668.
20. Kania J, Kramer M. Collective Impact. Palo Alto, CA: Stanford Social Innovation Review; 2011.
21. National Center for Environmental Innovation. Guidelines for Evaluating an EPA Partnership Program (Interim). Washington, DC: Environmental Protection Agency; 2009.
22. Program Performance and Evaluation Office. Improving the Use of Program Evaluation for Maximum Health Impact: Guidelines and Recommendations. Atlanta, GA: Centers for Disease Control and Prevention; 2012.
23. Centers for Disease Control and Prevention. Framework for program evaluation in public health. Morbidity and Mortality Weekly Report series 48 (No. RR-11). https://www.cdc.gov/mmwr/PDF/rr/rr4811.pdf. Published 1999. Accessed March 14, 2018.
24. National Institute of Environmental Health Sciences. Partnerships for Environmental Public Health Evaluation Metrics Manual. NIH Publication No. 12-7825. Washington, DC: US Department of Health and Human Services; 2012.
26. The National Academies of Sciences, Engineering, and Medicine. Principles and practices for federal program evaluation. In: Proceedings of a Workshop—in Brief; 2017; Washington, DC.
27. Hanleybrown F, Kania J, Kramer M. Channeling Change: Making Collective Impact Work. Palo Alto, CA: Stanford Social Innovation Review; 2012.
28. Turner S, Merchant K, Kania J, Martin E. Understanding the Value of Backbone Organizations in Collective Impact. Palo Alto, CA: Stanford Social Innovation Review; 2012.
29. Patton MQ. Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use. New York, NY: Guilford Press; 2010.
30. Buckner AV, Goldstein BD, Beitsch LM. Building resilience among disadvantaged communities: Gulf Region Health Outreach Program overview. J Public Health Manag Pract. 2017;23:S1–S4.
31. Langhinrichsen-Rohling J, Osofsky H, Osofsky J, Rohrer G, Rehner T. Four states, four projects, one mission: collectively enhancing mental and behavioral health capacity throughout the Gulf Coast. J Public Health Manag Pract. 2017;23:S11–S18.
32. Hansel TC, Osofsky HJ, Langhinrichsen-Rohling J, et al. Gulf Coast resilience coalition: an evolved collaborative built on shared disaster experiences, response, and future preparedness. Disaster Med Public Health Prep. 2015;9(6):657–665.
33. Langhinrichsen-Rohling J, Wornell C, Johns K, Selwyn C, Friend J. The nuts and bolts of developing integrated healthcare in under-resourced primary care settings: challenges and lessons learned. In: Craig WS, ed. Integrated Psychological Services in Primary Care. Hauppauge, NY: Nova Science Publishers; 2015:67–87.
Keywords: collective impact; evaluation; integrated care

Copyright © 2019 Wolters Kluwer Health, Inc. All rights reserved.