Joly, Brenda M. PhD, MPH; Booth, Maureen MRP; Shaler, George MPH; Mittal, Prashant MSc, MS
There is growing interest in applying systematic approaches to quality management, so prevalent in other sectors, to public health. The introduction of a national voluntary accreditation program raises the stakes for state and local health departments (LHDs) to more closely examine operations and outcomes.1 Funding agencies are placing new demands on health departments to document how their efforts result in tangible improvements. And public health administrators, faced with declining resources and widening demand for their services, are having to monitor performance, set priorities, and assure that processes and practices are effective and lead to improved quality.
In 2005, The Robert Wood Johnson Foundation funded what was to become a 3-phase initiative known as the Multi-State Learning Collaborative (MLC) that ultimately focused on building capacity within state and LHDs to assess and improve the quality of public health services and outcomes. The history of the MLC and a description of the independent evaluation have been previously presented.2,3 The third phase of this initiative emphasizes the use of “mini-collaboratives” as the focal point for learning quality improvement (QI) skills and applying them to specific areas. Details about the mini-collaborative process and an evaluation of the mini-collaborative approach are reported elsewhere in this issue. In addition to the mini-collaborative intervention, the National Network of Public Health Institutes, manager of the MLC initiative, conducted face-to-face meetings, site visits, and open forums with states and their participating LHDs to bring together QI and subject-matter experts, share lessons and experience, and provide updates on national accreditation standards and processes. National webinars, ongoing conference calls, and an e-library further supplemented the resources available to state grantees throughout the project.
Given the emphasis on QI, MLC evaluators developed and tested a standardized instrument to assess the initiative's impact in changing QI practice, culture, capacity, and spread among health departments in the 16 MLC states. This instrument was labeled the QI Maturity Tool and is now in its third version (B. M. Joly, M. Booth, and G. Shaler, unpublished observation, 2011). The development of this tool was a first-time effort to identify and quantify factors believed to influence QI maturity in public health agencies and is understood to be the first validated tool of its kind for this setting. In its design, the evaluators relied heavily on research from other fields to substantiate the nature of how organizations adopt QI and the factors that are most likely to contribute to that adoption and diffusion within an organization. While the reliance on research unrelated to public health has obvious limitations, we believed it was instructive in understanding the intricacies and path of change within any organization. A summary of this literature and the specific factors known to influence QI culture, capacity and competency, as well as alignment and spread have been reported elsewhere.4
Our article provides evaluation results based on the QI Maturity Tool and its assessment of 4 QI domains: (1) organizational culture, (2) capacity and competency, (3) QI practice, and (4) alignment and spread. Through this analysis, we sought to determine whether there would be significant changes within these 4 QI domains at the end of the project among all LHDs within the 16 participating states, and among the LHDs that participated in a mini-collaborative. We also wanted to assess whether findings from other research showing that the greatest performance improvement occurs among the lowest quartile applied to the MLC initiative.5 The following 3 hypotheses were tested among those LHD respondents who completed the QI Maturity Tool in each of the 3 years of its administration:
Hypothesis 1: All LHDs will experience an increase in summative composite QI factor scores over the 3 years.
Hypothesis 2: LHDs participating in a mini-collaborative will have higher summative composite QI factor scores than their nonparticipating LHD counterparts.
Hypothesis 3: LHDs identified in the lowest quartile in 2009 will have a greater increase in their overall summative composite QI factor scores compared to all other quartiles combined.
This article describes the methods and data analyses used in the evaluation and concludes with a discussion of the findings and implications for public health research and practice.
This component of the evaluation was based on a longitudinal cohort design with the LHD serving as the unit of analysis. LHDs were identified on the basis of the definition and criteria used by the National Association of County and City Health Officials (NACCHO).6 As of 2009, there were 1161 LHDs in the 16 participating MLC states, representing approximately 41% of all LHDs across the country. The survey was sent to the administrator of each health department in the winter of 2009, 2010, and 2011. As an incentive to participate, the system generated an agency-specific QI report on the basis of areas of strength and opportunity.
To minimize the length of our evaluation survey, agency characteristics (eg, jurisdiction, annual expenses, staff size) were obtained from the 2008 Profile Survey administered by NACCHO. The 2008 Profile data set was merged with our data, and as mentioned previously, all LHDs that responded in each of the 3 years were included in the hypotheses testing.
Measures and Dimensions
The QI Maturity Tool was embedded as part of a larger MLC evaluation Web-based survey that included a series of skip patterns as well as items on national voluntary accreditation and the MLC initiative. The development and psychometric properties of the QI Maturity Tool have been reported previously.4 The 2009* version of the QI Maturity Tool (version 1.0) included 37 items used to assess the following 4 quality domains:
* QI Practice (n = 4): The number, type, and length of formal quality improvement efforts.
* QI Organizational culture (n = 8): The values and norms that pervade how the agency interacts with its staff and external stakeholders.
* QI Capacity and Competency (n = 10): The skills, functions, and approach used within an organization to assess and improve quality.
* QI Alignment and Spread (n = 15): The extent to which QI supports (and is supported by) the organization as well as the diffusion of QI within the agency.
With the exception of the practice domain, all items in the tool used a closed Likert-type response scale on which “1” corresponded to “strongly disagree” and “5” corresponded to “strongly agree.” An “I don't know” option was also included because the concept of QI remained relatively new for some health departments; however, few respondents selected this option, and it was therefore dropped from the analysis.
Each QI domain (except practice given the nature of the questions) was subjected to a principal component analysis conducted in 2009 among all respondents based on 28 of the 33 questions. One item pertaining to external drivers of QI culture was eliminated to better reflect the desired measure of an organization's internal culture. In addition, 4 items in the alignment and spread domain provided limited data as a result of a skip pattern and were therefore dropped from the analysis. Labels were generated for each dimension and their content is described as follows.
QI Organizational Culture: A 2-factor solution was identified and the 2 factors accounted for 68% of the total variance explained. The first factor or dimension was labeled commitment and included 3 items assessing receptivity to QI, the impetus for improving quality, and the experiences of leadership based on common goals. The second dimension was labeled collaboration and focused on problem solving, staff involvement, data sharing, and shared accountability.
QI Capacity and Competency: Three factors or dimensions were detected and labeled as (1) skills, (2) methods, and (3) investment. These 3 factors accounted for 72% of the variance explained. Two items were included in the skills dimension and they focused on leadership and staff training in basic QI methods. The methods dimension included 6 items that assessed the ability of staff to monitor quality, identify root causes, and implement best practices. This dimension also included 2 items used to measure the availability of quality measures within the agency and current evaluation practices. Finally, the investment dimension focused on 2 items; one was related to the designation of a QI officer and a second question focused on existing processes for determining QI priorities.
QI Alignment and Spread: Three factors or dimensions were identified: (1) integration, (2) authority, and (3) value. These 3 factors accounted for 69% of the variance explained. The integration dimension covered the alignment of QI responsibilities with job descriptions; the use of customer satisfaction data; the availability of accurate and timely data; the incorporation of QI into daily practice; awareness of QI expertise; and the level of participation, adoption, and time available for QI efforts. The authority dimension included one item assessing staff authority to make needed changes. Finally, the value dimension included 2 items measuring perceptions about QI implementation and the importance of devoting time and resources to QI efforts.
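The factor-extraction step described above can be sketched in Python with scikit-learn. The MLC survey data are not public, so the responses below are simulated; the sample size (404 agencies) and the 2-factor culture solution are taken from the article, while the random data are purely illustrative and will not reproduce the reported 68% variance figure.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)

# Simulated Likert responses (1-5) from 404 agencies on the 7 culture
# items retained in the factor analysis. The real analysis was run in
# 2009 on 28 items spanning 3 domains.
responses = rng.integers(1, 6, size=(404, 7)).astype(float)

# Principal component analysis to identify underlying dimensions;
# the paper retained a 2-factor solution for the culture domain.
pca = PCA(n_components=2)
pca.fit(responses)

# Proportion of total variance explained by the retained factors
# (reported as 68% for the actual 2-factor culture solution).
print(pca.explained_variance_ratio_.sum())
```

In practice, items are assigned to the dimension on which they load most heavily (inspect `pca.components_`), which is how labels such as "commitment" and "collaboration" were attached to the extracted factors.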
Descriptive statistics were used to assess individual responses to all items. Bivariate analyses were conducted to determine differences in agency characteristics based on (1) survey participants and nonparticipants, (2) respondents who completed the QI Maturity Tool all 3 years (matched set) versus those who responded in only 1 or 2 years, (3) the matched set of agencies that participated in a mini-collaborative and those that did not, and (4) the matched set of agencies that did and did not fall into the lowest quartile for QI maturity on the basis of 2009 results.
The dimensions identified in 2009 were applied to survey results in 2010 and 2011. Summed factor composite scores were created by adding the values of the underlying items in each factor. These scores were then compared using ANOVA techniques to determine differences in 3 areas: year, mini-collaborative participation (yes, no), and assigned quartile (lowest vs all others). We used a generalized linear model to run a factorial ANOVA testing each domain and the corresponding dimensions on these 3 areas and their interactions. All results were considered significant at the α = .10 level. All analyses were conducted using SPSS version 17 (SPSS, Inc, Chicago, Illinois) and SAS version 9.2 (SAS Institute, Inc, Cary, North Carolina).
Respondents and agency characteristics
General information about the agencies that responded to the survey in all 3 years is provided in Table 1. The overall survey response rates were 60% in 2009, 65% in 2010, and 63% in 2011. A total of 907 LHDs responded to the QI survey over the 3-year period and 404 agencies participated in all 3 years.
The results of the bivariate analyses exploring differences in agency characteristics among the various samples are provided in Table 2. In general, there were no differences in population size, number of full-time equivalents, or annual expenditures among (1) survey respondents and nonrespondents, (2) respondents in the matched set and LHDs that participated in only 1 or 2 years, and (3) the matched set of LHDs in the lowest quartile versus LHDs in the remaining 3 quartiles. Overall, survey respondents were more likely to be county-based. Agencies that participated in a mini-collaborative had significantly fewer full-time equivalents and lower annual expenditures than their non–mini-collaborative counterparts within the matched sample.
A summary of the descriptive findings can be found in Table 3. Overall, LHDs reported high levels of agreement on the culture items. Across all 3 years, mean scores were above 4.0 for all 7 items included in the factor analysis. Mean scores in this same period ranged from 2.3 to 3.6 in the capacity and competency domain and from 1.1 to 4.3 in the alignment and spread domain.
Hypothesis testing among matched sample (n = 404)
QI Practice: The results revealed a statistically significant increase during the 3-year period in the number of LHDs that reported ever implementing a formal process to improve the performance of a service, program, process, or outcome (χ2 = 22.75, df = 4, P < .0001). The analyses also indicated that LHDs participating in a mini-collaborative over the 3-year period were more likely to report implementing a formal QI process than LHDs that did not participate in a mini-collaborative (χ2 = 9.59, df = 2, P = .0083). A significant difference was also found between LHDs in the lowest quartile and health departments in the remaining quartiles (χ2 = 26.73, df = 2, P < .0001), indicating that those in the bottom quartile were less likely to implement QI.
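The practice-domain comparisons above are chi-square tests of independence on contingency tables. A minimal sketch with SciPy, using hypothetical counts (not the MLC data) for participation by reported QI implementation, yields the same df = 2 structure as the mini-collaborative comparison:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical contingency table: mini-collaborative participation (rows)
# by reported formal QI implementation (yes / no / unsure, columns).
# Counts are illustrative only, not the MLC survey results.
table = np.array([
    [150, 30, 10],   # participated in a mini-collaborative
    [140, 60, 14],   # did not participate
])

# Chi-square test of independence; a 2 x 3 table gives df = (2-1)(3-1) = 2.
chi2, p, df, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {df}, p = {p:.4f}")
```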
QI Organizational Culture: As seen in Table 4, there were no significant changes in this domain or its dimensions over the 3 years or among mini-collaborative participants. However, there was a significant change in all areas among agencies in the lowest quartile when compared with their counterparts in the other 3 quartiles. In addition, the results suggested an interaction effect between quartile status and year.
QI Capacity and Competency: The results indicated a significant change in this domain overall and among the 3 dimensions for both year and quartile status. Post hoc analyses revealed significant differences between 2009 and 2011 across all factors. In addition, 2 of the 3 dimensions (skills and methods) indicated significant differences based on participation in a mini-collaborative. Scores in the skills dimension were higher among mini-collaborative participants and scores for the methods dimension were lower for this group. Interaction effects were identified at the domain level and across all dimensions between year and quartile status. An interaction effect was also noted in the skills and investment dimensions between quartile status and participation in a mini-collaborative.
QI Alignment and Spread: The findings suggest a significant change in this domain and among the 3 dimensions for both year and quartile status. Once again, post hoc analyses revealed that significant increases in factor scores occurred between 2009 and 2011. The results also revealed differences in the value dimension based on participation in a mini-collaborative. Additional post hoc analyses indicated that mini-collaborative participants had higher scores in this dimension. Interaction effects were identified at the domain level and across all dimensions between year and quartile status. An interaction effect was also noted in the overall domain and in the value dimension between quartile status and participation in a mini-collaborative.
The QI Maturity Tool was designed to assess changes in 4 quality improvement domains: (1) practice, (2) organizational culture, (3) capacity and competency, and (4) alignment and spread. Given that QI was a significant driver of the MLC initiative, we sought to determine whether changes occurred among LHDs in the 16 participating states during the initiative's 3 years. Furthermore, we explored whether participation in a mini-collaborative enhanced the chances of further QI development and whether low performers could make dramatic gains during a 3-year period.
Overall, the results revealed significant changes in 3 major areas: (1) practice, (2) capacity and competency, and (3) alignment and spread. Improvements in these domains and across all of their accompanying dimensions were significant between 2009 and 2011.
QI Practice: The results indicated an increase in the number of LHDs that implemented formal QI efforts over the course of the grant. The results also underscored the role of mini-collaboratives in the adoption and acceleration of QI tools and approaches at the local level.
QI Capacity and Competency: Our findings suggest that LHDs enhanced their QI capacity and competency in terms of skills, methods, and investment. Given the nature of the MLC initiative, the focus on QI, and external drivers such as national voluntary accreditation, LHDs had both the opportunity and the incentive to develop their QI skills and refine their agencies' processes. The role of a mini-collaborative was clearly important in building skills. However, one noteworthy finding was the higher scores in the methods dimension among non–mini-collaborative participants. We hypothesized several reasons for this finding. First, LHDs participating in a mini-collaborative were smaller and had lower annual expenditures, suggesting fewer staff and resources available for applying QI. Second, the initiative took place during an economic downturn in which LHDs nationwide lost a significant number of jobs and substantial funding.7 Mini-collaborative participants may have been especially hard hit given their smaller staffs and budgets. A recent report indicated that while smaller LHDs were less likely than larger LHDs to experience staff reductions, a larger percentage of their workforce was affected when they did.7 Third, despite an increase in skills among agencies involved in a mini-collaborative, these LHDs may have had limited means or opportunity to apply their newfound QI methods.
QI Alignment and Spread: Results from the QI Maturity Tool detected significant changes that suggest LHDs were able to better integrate QI into programs and services, allocate staff time for QI, and engage additional staff in QI efforts over the course of the grant. The findings also provide preliminary data about a mini-collaborative's role in shaping the perceptions of QI.
QI Organizational Culture: In terms of the organizational culture domain and accompanying dimensions, the findings revealed no significant changes during the project. The scores for these factors remained stable and high throughout the 3 years. We suspect that additional work is needed to adequately capture and measure the nuances of a culture of quality in a self-report format.
Several limitations of the study should be acknowledged. First, the survey upon which the findings are based was administered to the health officer or his or her designee; the responses therefore represent the opinion of a single individual and may not reflect the true status of quality improvement efforts within an LHD. Second, the final year of survey administration occurred while some mini-collaboratives were still under way, so the full effect of participation may not be captured in these data. Third, all data were self-reported, with no independent validation of factual information. Preliminary findings from our third round of case studies, conducted as part of the evaluation, raised questions about the congruity of survey responses, self-assessments made during interviews, and evaluator observations.8 Finally, our analysis pertains only to the 16 states participating in the MLC initiative, and the results may not be generalizable. We have no way of knowing whether the observed changes in QI over the course of the 3-year initiative also occurred in non-MLC states.
Implications and next steps
The QI Maturity Tool provides a rich source of information on the scope and pace of change in LHDs over a critical 3-year period during which time national and state efforts were under way to build QI awareness and capacity. The results have important implications for how we measure and track QI adoption, how we support LHDs in their QI efforts, and how we assess the long-term impact of QI on public health services and outcomes. More work is needed to better understand our findings in relationship to other LHD characteristics (eg, urban vs rural, centralized vs decentralized) and based on the financing and scope of services provided. In addition, research on the “early adopters” and “laggards” may prove helpful in focusing on specific approaches designed to accelerate the adoption of QI.
Our research gives us confidence that the QI Maturity Tool is a useful instrument to help assess the adoption and spread of QI in public health. But as noted by other public health researchers,6,7 further refinement is needed to better standardize language and definitions of the component parts of a QI system that are being measured through the tool. For example, our analysis and case studies confirm that the term “QI Project” is not well understood and is often used when describing less deliberate and systematic activities to improve performance. We know that the QI Maturity Tool assesses both the factors leading to QI adoption (eg, agency culture) as well as specific activities denoting QI application and spread throughout an agency. While both are essential to understanding QI maturity, the tool needs to be expanded if it is to distinguish among the various stages of QI adoption and spread. We believe it is important to rethink the administration of the survey to only one person within an agency. The obvious simplicity of administration and scoring must be weighed against the value of obtaining a more representative composite view of an agency's QI culture, capacity, and spread. Finally, preliminary findings from our case studies suggest that survey respondents are inclined to inflate their assessment of agency culture and QI activity. Further work is needed to minimize this tendency.
Findings indicate that a mini-collaborative approach shows promise in building QI capacity and competency and raising the value of QI within an agency. Mini-collaboratives hold particular promise for lower-performing agencies, which showed remarkable progress in all domains over a 3-year period. Except as noted, mini-collaboratives generally did not change QI culture or the alignment and spread of QI throughout an agency. Future initiatives should target LHD leaders and administrators who shape the environment and make it conducive to the adoption and use of QI.
There is much to learn from research in other fields on the stages of QI development and to test that experience within the public health community. Initial work in defining the QI continuum in public health9 should be further developed. As Beitsch et al10 maintain, the field needs reliable measures to assess the adoption of QI methods in local and state health departments. We believe that additional refinement of the QI Maturity Tool, including additional psychometric analyses of the latest version (3.0), will provide a useful basis for both plotting the progress of public health agencies and documenting the indicators for achieving progressive developmental stages of maturity.
2. Gillen SM, McKeever J, Edwards KF, Thielen L. Promoting quality improvement and achieving measurable change: the lead states initiative. J Public Health Manag Pract. 2010;16(1):55–60.
3. Joly BM, Shaler G, Booth M, Conway A, Mittal P. Evaluating the multi-state learning collaborative. J Public Health Manag Pract. 2010;16(1):61–66.
4. Joly BM, Booth M, Mittal P, Shaler G. Measuring quality improvement in public health: the development and psychometric testing of a QI maturity tool. Eval Health Prof. In press.
5. Integrated Healthcare Association. The California Pay For Performance Program. The Second Chapter Measurement Years 2006–2009. Innovation Through Collaboration. White Paper. Oakland, CA: Integrated Healthcare Association; 2009.
8. Joly BM, Booth M, Shaler G, Conway A. MLC Case Studies, Preliminary Findings From Year 3: The Rhetoric Versus Reality of Quality Improvement in Local Public Health Agencies. Portland, ME: Muskie School of Public Service, University of Southern Maine; 2011. Evaluation Report.
9. Riley WJ, Moran JW, Corso LC, Beitsch LM, Bialek R, Cofsky A. Defining quality improvement in public health. J Public Health Manag Pract. 2010;16(1):5–7.
10. Beitsch LM, Leep C, Shah G, Brooks RG, Pestronk RM. Quality improvement in local health departments: results of the NACCHO 2008 survey. J Public Health Manag Pract. 2010;16(1):49–54.
*On the basis of additional characteristics known to influence QI, 9 items were added to the existing domains in 2010 and 5 new items were incorporated in 2011. However, given our interest in comparing results over the course of the project, all analyses were based on the original 37-item instrument.
local health departments; mini-collaboratives; multi-state learning collaborative; quality improvement
© 2012 Lippincott Williams & Wilkins, Inc.