Honoré, Peggy A. DHA; Schlechte, Tricia MPH, BSN
Policies such as the Government Performance and Results Act were implemented to advance concepts for linking funding to performance. However, funding for governmental services is typically provided on the basis of organization mission,1 not necessarily past financial or operational performance. Allocations for preparedness provide an excellent illustration: these funding flows are difficult to trace, and performance metrics have not been well defined.2
Efforts to measure system performance when fulfilling the public health mission have advanced through the National Public Health Performance Standards Program (NPHPSP). Performance standards are used to determine system capacity to deliver the 10 Essential Public Health Services (essential services). These services describe the functions performed in public health when satisfying its mission of ensuring conditions in which the population can be healthy.3
Finance has also been noted as a major indicator when measuring system performance.4 Examinations to simultaneously measure performance in each of the 10 essential services and the resources consumed to achieve desired results are powerful analytical tools. Such analysis can be beneficial for (1) describing expenditure patterns, (2) examining funding alignments with organizational mission, and (3) prioritizing and realigning funding allocations.
In 2004, the Missouri Department of Health and Senior Services (DHSS) designed a pilot study to combine financial and performance analysis. A goal of the study was to measure the amount of resources consumed in each of the 10 essential service categories to achieve performance levels as measured by the NPHPSP Assessment Instrument. The purpose of this study was primarily to gain a firm understanding of expenditure patterns in the agency and to increase knowledge on the utilization of public health funding. Information from such a study could be useful for prioritizing funding allocations to areas where performance levels are inadequate. Such quantitative analysis is a basic component to quality improvement efforts. Additional motivators of the study were to
* replicate previous state public health agency expenditure pilot studies;
* test the ability to implement a previous Healthy People 2010 financial data-collection goal;
* quantify the level of administrative expenditures as a percentage of total expenses; and
* address Institute of Medicine recommendations to measure investments made and needed in the 10 essential services.3
Previous Pilot Studies
The initial pilot study to categorize state expenditures by the 10 essential services was undertaken by the Public Health Foundation (PHF) in 1994.5 The study involved nine state health departments and was designed to test a methodology for estimating state investments in each of the 10 essential services and administration. The study concluded that state agency expenditures could be defined and measured under this framework. A second study followed in 1997 as a joint effort by the National Association of County and City Health Officials (NACCHO), National Association of Local Boards of Health (NALBOH), and the PHF to test the feasibility of using the same methodology for measuring local public health agency expenditures by the 10 essential services.6 Three case studies were developed of local public health agencies in Onondaga County, New York; Northeast Tri-County, Washington; and Columbus, Ohio. This study also concluded that the process was feasible and would not require significant investments of time and resources. In 2000, a report titled Statewide Public Health Expenditures: A Pilot Study in Maryland was issued by ASTHO, NACCHO, NALBOH, and PHF.7 The report documented a third collaborative effort, conducted statewide in Maryland, to determine the reliability and validity of categorizing statewide (state and local health agency) expenditure data by the 10 essential services and to refine the data-collection tool used in the two previous studies. A final conclusion from that study was that the framework of the 10 essential services could be successfully adapted for financial data collection statewide.
While the three previous studies did report favorable observations regarding the feasibility of collecting financial data under this framework, they also provided a host of limitations. The 1994 nine-state study cited concerns with data variability and difficulties with data collection due to differences in agency organizational structures. Observations in the 1997 project included concerns with data comparability and reliability attributable to the data-collection tool. A common problem was the tendency to allocate program expenditures across either too many or too few of the 10 essential service categories. The 2000 Maryland report also noted statewide variations with quantifying administrative expenditures.
The DHSS pilot project was designed to categorize only fiscal year 2004 actual expenditures of the state agency as documented in its accounting information system. Local public health agencies in the state were not included in the study. State government restructuring in 2001 placed the Division of Aging into the former Department of Health; as a result, the name of the agency was changed to the DHSS. Senior services and regulatory functions were located in a separate division from other public health functions within the agency. Major programs in each of the agency's operating areas at the time of this study in 2004 are shown in Figure 1.
Expenditures related to some functions were not considered as customary public health activities and were eliminated from this study. Both Senior Prescription Drug and Health Facilities Review programs were excluded. All other programs, including senior services functions, were deemed to be relevant to the mission of population-based public health and were included in the study.
In 2004, the DHSS measured state system-level performance for delivering the 10 essential services by using the NPHPSP State Performance Assessment Instrument (Performance Assessment Instrument). The Performance Assessment Instrument was developed at the Centers for Disease Control and Prevention in collaboration with national partners as a uniform method to measure performance within the public health system. The main feature of the Performance Assessment Instrument is that it is a self-assessment survey to be completed with participation from all contributors of state-level public health services. The Performance Assessment Instrument is divided into 10 sections aligned with each of the essential services. Performance scores, expressed as a percentage of an ideal score of 100 percent, are assigned on the basis of the system's ability to deliver services consistent with performance standards established for each of the 10 categories.
The performance assessment study was undertaken as a separate activity from this project. Seven assessment meetings were held between August and September 2004. Statewide organizations that contribute to the delivery of public health services8 were invited to participate as stipulated by the Assessment Instrument guidelines. Data from this study were used in this research to compare to expenditure percentages in each of the 10 essential service categories.
The DHSS used an agency-wide letter to introduce and disseminate information about the study to measure expenditures by the specified categories. In the letter, the agency communicated a desire to document and analyze its investments in public health services. Also included were critical guidelines on project coordination and methods for allocating and collecting the financial data.
To guide allocation decisions, the Maryland study included a data-collection instrument that had been refined on the basis of the 1994 and 1997 expenditure studies.7 This data-collection instrument also contained a Crosswalk of Program Activities (Crosswalk) for each of the 10 essential service categories. The Crosswalk was designed to aid staff in identifying and estimating the percentage of time spent on program activities within each of the 10 categories. The DHSS utilized the Crosswalk but did not use all of the suggested allocation guidelines provided in the data-collection instrument. The differences included the following: (1) expenditure allocations were not categorized by funding source (ie, state, federal, local); (2) expenditures for fiscal and administrative employees (ie, fiscal liaisons, fiscal clerks) within each program were categorized under the administration category; and (3) expenditure allocations were based solely on actual 2004 fiscal year expenditures and did not include any budget data, which were permissible in the Maryland study.
Guidelines in the agency-wide letter to program managers included instructions to
* identify the percentage of time each employee worked on specific essential services as a part of their primary job functions;
* categorize staff with primary fiscal responsibilities under administration;
* categorize division directors under essential service 5 (develop policies and plans);
* allocate support staff using the same percentages identified for specific programs;
* limit the minimum percentage for any essential service function to 10 percent;
* allocate expenditures incurred under program contracts based on the specific essential service activities performed under the contracts;
* allocate expense and equipment expenditures to the essential services based on the percentages identified for the individual organizational units; and
* allocate administrative offices (ie, division of administration, legal counsel, agency director's office) to the administration category.
The percentages identified through this process were applied to actual fiscal year 2004 expenditures (taken from the accounting information system) for each DHSS organizational unit included in the study. Initially, study investigators contemplated using the computerized Time Accounting System to calculate the category allocations for every employee. Under that design, the percentage of time for each category would have been identified for every position in the agency and coded into each employee's time accounting record. This would have allowed modifications as position functions changed, facilitated automatic allocation of personnel expenditures to the essential service and administration categories during each pay cycle, and eliminated the need for any manual allocation of payroll expenditures. Following several months of work on prototypes with information system programmers, however, the DHSS determined that this process was burdensome and not feasible, given complexities related to accounting system labor distribution coding (eg, 10 lines of coding for some employees distributed among the appropriate categories). Ultimately a decision was made to manually allocate the expenditures into the appropriate categories at the end of the fiscal year.
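The manual year-end allocation described above can be sketched in a few lines of code. The sketch below is illustrative only: the unit, dollar amount, and time-percentage shares are hypothetical, not figures from the DHSS study, and the category labels ("ES3", "ES7a", "ADMIN") are shorthand invented here for the essential service and administration categories.

```python
# Hypothetical sketch of the allocation step: staff-reported time percentages
# for an organizational unit are applied to that unit's actual expenditures.

def allocate(unit_expenditure, percentages):
    """Distribute a unit's actual expenditures across categories.

    `percentages` maps category -> share of staff time; shares must sum to 1.0.
    """
    total = sum(percentages.values())
    if abs(total - 1.0) > 1e-9:
        raise ValueError("category shares sum to %s, expected 1.0" % total)
    return {cat: unit_expenditure * share for cat, share in percentages.items()}

# Hypothetical unit with $2,000,000 in actual FY expenditures, with staff time
# reported as 60% essential service 3, 30% essential service 7(a), and 10%
# administration.
unit_alloc = allocate(2_000_000, {"ES3": 0.60, "ES7a": 0.30, "ADMIN": 0.10})
```

Summing such per-unit allocations across all included organizational units would yield the agency-wide category totals reported in the results.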
Fiscal year 2004 expenditures allocated to the 10 essential services in this study totaled $367,375,300. DHSS funding for 2004 represented 72 percent from federal sources, 19 percent from the state, and 9 percent from other sources.9 Rankings of expenditure percentages and performance scores in each of the essential service categories and administration are presented in Table 1 and Figure 2.
The category receiving the highest percentage of expenditures in the DHSS was essential service 3 (inform, educate, and empower) (17.1%). Performance in this category was measured at 47 percent and ranked fourth. The Division of Community Health accounted for the highest allocation (24%) of its total expenditures to this category. The second highest level of expenditures was in essential service 7, which was divided, as stipulated by the Crosswalk guidelines (Table 2), into two categories: 7(a)—assure the provision of care (15.0%), and 7(b)—link people to needed services (11.8%). The NPHPSP classified performance of essential service 7 as a single score, and this category was rated as 39 percent, sixth in overall DHSS rankings for performance. The senior services function that was transferred to the agency in 2001 had 99 percent of its payroll expenditures allocated to essential service 7(a)—assure the provision of care. Forty-five percent of that division's contract expenditures were allocated to 7(b)—link people to needed services. Also, the Center for Health Information Management and Evaluation had 26 percent of its payroll expenditures in essential service 7(b)—link people to needed services, given its responsibility of developing and maintaining data systems to support services as stipulated in the Crosswalk tool.
The third-highest ranking expenditure category was essential service 9 (evaluate health services) with 10.2 percent and had a performance score that ranked seventh at 31 percent. The Division of Senior Services and Regulation had 54 percent of its payroll expenditures in this category.
The administration category ranked sixth, receiving 7.1 percent of agency expenditures, with no score for performance since it is not a category measured under the NPHPSP. This was only slightly above essential service 4—mobilize community partnerships (6.6%).
In contrast, the three highest performance scores were essential service 2 (diagnose and investigate) (72%) but ranked eighth in expenditures (5.1%); essential service 1 (monitor health status) (50%) ranked fourth in expenditures (8.8%); and essential service 6 (enforce laws and regulations) (48%) with expenditures ranked as ninth (5.0%).
Interestingly, the three categories with the highest percentage of expenditures—essential service 3 (inform, educate, and empower), essential service 7 (assure the provision of care and link people to needed services), and essential service 9 (evaluate health services)—had relatively low performance scores, ranging from 47 percent to 31 percent, against a median performance score of 41.5 percent for all 10 categories. Conversely, the three categories receiving the highest performance scores—essential service 2 (diagnose and investigate), essential service 1 (monitor health status), and essential service 6 (enforce laws and regulations)—had relatively low percentages of expenditures, ranging from 8.8 percent to 5.0 percent, as illustrated in Table 2. Essential service 5 (develop policies and plans) and essential service 10 (research) ranked low in the percentage of expenditures consumed, 9th and 10th, respectively, and also ranked relatively low, 8th and 9th, in performance. In the remaining two categories, essential service 8 (assure a competent workforce) and essential service 4 (mobilize community partnerships), as in most other categories, no clear or consistent patterns were observed between rankings of expenditure percentages and performance scores.
It should be noted that the state of Missouri does not include fringe benefit amounts in the appropriation to state agencies. Accordingly, fringe benefit expenditures were not included in this study. Since the fringe benefit rate is a constant percentage (40.4% for fiscal year 2004), applying that amount to each of the expenditure categories in this study would not have significantly changed the category expenditure percentages when compared to total expenditures nor would the rankings have been different.
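The fringe-benefit argument above rests on a simple mathematical fact: multiplying every category by the same positive constant (here, 1 + 0.404) changes no category's share of the total and cannot reorder the rankings. A short sketch makes the reasoning concrete; the dollar figures and category labels below are illustrative stand-ins, not the study's actual amounts.

```python
# Sketch of the invariance claim: scaling all categories by a constant
# fringe-benefit rate leaves shares of the total and rankings unchanged.
# Figures are hypothetical (in millions of dollars).

base = {"ES3": 62.8, "ES7a": 55.1, "ES7b": 43.3, "ES9": 37.5, "ADMIN": 26.1}

def shares(expenditures):
    """Each category's fraction of total expenditures."""
    total = sum(expenditures.values())
    return {k: v / total for k, v in expenditures.items()}

def ranking(expenditures):
    """Category labels ordered from highest to lowest expenditure."""
    return sorted(expenditures, key=expenditures.get, reverse=True)

# Apply the FY2004 fringe-benefit rate of 40.4% uniformly.
with_fringe = {k: v * 1.404 for k, v in base.items()}
```

Because the multiplier cancels in every share (kv / k·total = v / total), the percentages and the rank order are identical with or without fringe benefits, exactly as the authors note.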
The purpose of this study was to measure the amount of financial resources used to achieve levels of performance. Alignments of expenditures and performance levels were not predicted. However, a significant observation provided by this study was that, generally, expenditure levels were not aligned with performance scores. With the exception of relatively low performance and expenditure percentages in essential service 5 (develop policies and plans) and essential service 10 (research), there was no clear consistency of funding and performance in any of the other categories. One possible explanation for low performance and expenditures in essential service 10 (research) is that state health departments do not typically have large capacities for research. As such, performance and funding dedicated to this activity would be relatively low.
DHSS management was not entirely surprised by the finding of a lack of clear patterns for alignments of funding and performance. First, it is acknowledged that some programs and services have different levels of funding requirements to meet performance expectations—some programs simply cost more to operate. Also, a reservation from the beginning was that the Performance Assessment Instrument was a measure of system performance and not exclusively of the agency. However, the only expenditures that could be reasonably measured were those of the DHSS. Also, the agency acknowledged concern that utilizing a system performance measurement methodology did not necessarily help DHSS determine what level of agency investments were needed to improve agency performance, a responsibility and function under their direct control. An additional instrument to measure agency-specific performance would have greater management utility for analyzing and comparing funding to performance levels.
A component of the study that was particularly helpful was the measurement of administrative expenditures. Administrative expenses are typically used in healthcare organizations as a measure of productivity.4 This study found that the administration category represented 7.1 percent of total expenditures in the DHSS. While it is acknowledged that there most likely were differences in allocation methodologies between the Maryland study and that of the DHSS, the Maryland state agency category for coordination and administration accounted for 10 percent of total agency expenditures. The range among Maryland state offices that participated in that study was 13 percent to 24 percent. A 1998 study in New Jersey found that administrative services accounted for 20 percent of public health expenditures.10 This was not considered a good measure for comparison since allocation methods were not provided for review in the New Jersey report. The DHSS considered the Maryland study to be a valid comparison, and the DHSS took a more aggressive approach to allocating administrative expenditures than the Maryland guidelines. Legislators in recent years had targeted budget reductions to administration, assuming that a large percentage of agency funding was directed to this function. Since the DHSS financial coding system did not track administrative expenditures, the DHSS was challenged in communicating that the agency did not spend unusually high amounts for administration. This study provided clear evidence that would be useful for informing policymakers. However, the ability to benchmark with other similarly organized state agencies could provide additional opportunities to examine this category more closely.
It should be noted that even though essential service 7(a) (assure the provision of care) had 15 percent of total expenditures, DHSS funding does not support the direct provision of personal healthcare. In contrast, the Maryland study allocated 80 percent of agency expenditures to essential service category 7(b) (link people to needed services). Benchmarking in this category is difficult since there can be great variability in services provided by state public health agencies and also in the range of activities for this category as stipulated in the Crosswalk guidelines. This highlights the degree of difficulty encountered when trying to benchmark financial data in public health under existing methodologies. However, it also highlights the potential value of peer analysis.
Study Limitations and Challenges
There was a major limitation to this study for comparing expenditure patterns in each of the 10 essential service categories to performance scores. The Performance Assessment Instrument was designed to measure state public health system performance. According to the Performance Assessment Instrument guidelines, organizations required to participate in completing the survey include the state health department as well as others in the state that contribute to the delivery of public health services (eg, hospitals, schools, universities, not-for-profits).8 This proved to be a particularly difficult concept for some to comprehend, and many organizations did not participate. As a result of this design, the performance scores in each of the 10 essential service categories represent the combined contributions of the state public health agency and other organizations willing to participate. However, this study categorized only the expenditures of the state public health agency. Gathering financial data from all entities considered as contributing to the performance of public health services in a state was considered highly unrealistic. Some of the organizations considered under the Performance Assessment Instrument to contribute to public health services, such as universities, do not separately capture and categorize expenditures for public health services in their accounting systems. Also, the DHSS's preference would have been to undertake an agency-wide performance analysis for purposes of examining organizational patterns under its direct control; the Performance Assessment Instrument was not designed for that purpose. On the basis of these issues, it is unlikely that the agency would undertake a similar study in the future using the same system performance instrument.
The organizational structure of the DHSS could prove to be an additional limitation of this study. There is great variability in the types of services performed by state health agencies. Comparing expenditure percentages in each of the categories to other expenditure studies of state public health agencies that do not include senior services functions could be misleading since services performed are so dissimilar.
The DHSS also found the process to be time consuming and arduous, given information-age technologies. The Crosswalk was a complex feature and prone to subjective and, in some aspects, ambiguous interpretation. It is difficult to envision that such an instrument alone could facilitate the collection of valid, reliable, and verifiable data. This was also mentioned as a concern in earlier expenditure pilot studies and a 1997 Lewin Group report.11 As such, these documented concerns should not be summarily dismissed. Any serious consideration of the collection of national financial data must include very specific definitions for each category, and the categories must clearly meet the needs of stakeholders. Applying the essential services framework and a tool designed for public health activities to social service–type activities was very challenging. As was also indicated in the Lewin Group report,11 the essential service categories describe the framework of public health very well for some purposes, but uniform decision rules would be critical to developing meaningful datasets.
An alternate methodology that could aid public health in quantitative financial analysis is a standard chart of accounts. Charts of accounts are universally accepted financial management tools that provide a framework to organize financial data. This is a generally accepted accounting tool to facilitate the collection and sorting of data by cost centers, product lines, functional categories, or any other classifications deemed necessary for evidence-based decision making. Collection of these data manually would be considered very costly, unreliable, and inefficient by other industries. A uniform chart of accounts could be a solution if this is shown to be plausible for a profession with so many government financial reporting structures. A strategy to classify individual programs (eg, family planning, immunization, surveillance) into standard essential service categories could be a uniform allocation methodology adaptable to a chart of accounts. Without this level of standardization, however, invalid and unreliable datasets with little utility for peer analysis and relevance to stakeholders might be created. If demonstrated to be feasible, a chart of accounts would have benefits even given the variability of state and local budgeting frameworks. It is already a common accounting practice for individual agency account categories to be aggregated into state and local government specific budgeting and reporting frameworks. Including finance professionals in the design of financial data-collection methodologies could help translate some of these practices to public health.
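The program-to-category classification strategy described above can be illustrated with a brief sketch. The program names, category codes, and mapping below are hypothetical examples of how a standard chart of accounts might roll individual program cost centers up into essential service categories; they are not an official or proposed classification.

```python
# Hypothetical sketch: a fixed mapping from program cost centers to essential
# service categories, so individual transactions roll up uniformly regardless
# of an agency's local budgeting structure. Mappings are illustrative only.

PROGRAM_TO_CATEGORY = {
    "family_planning": "ES7",  # assure provision of / link to services
    "immunization": "ES7",
    "surveillance": "ES2",     # diagnose and investigate
    "vital_records": "ES1",    # monitor health status
}

def roll_up(transactions):
    """Aggregate (program, amount) transactions by essential service category."""
    totals = {}
    for program, amount in transactions:
        category = PROGRAM_TO_CATEGORY[program]
        totals[category] = totals.get(category, 0.0) + amount
    return totals

roll = roll_up([
    ("immunization", 500_000.0),
    ("surveillance", 250_000.0),
    ("family_planning", 100_000.0),
])
```

Because the mapping is defined once at the program level rather than estimated by each employee, allocations become repeatable and auditable, addressing the subjectivity concerns raised about the Crosswalk.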
The collection of public health financial and performance information must be feasible, relevant, and purposeful. Public health leaders need this information not only to measure agency financial and operational performance but also to demonstrate outcomes and the value added by the profession. The Institute of Medicine advocated for national plans to collect financial data aligned with the 10 Essential Public Health Services,3 and Healthy People 2010, even though the goal has been earmarked for deletion, also included a goal (#23-16) to collect data by the 10 essential services. Given this level of national interest, identification of methodologies to effectively measure financial and agency performance is imperative. This study was an initial attempt at this type of analysis. While the study and its results did not fully meet the needs of agency leaders, it was an important step in demonstrating the efficacy of the concept.
© 2007 Lippincott Williams & Wilkins, Inc.