Optimizing resource allocation is essential for effective academic health center (AHC) management. In making resource allocation decisions, managers in AHCs are typically guided by opportunities and constraints that are specific to their institutions. Tools such as mission-based budgeting, all-funds budgets, and funds-flow analysis are used to help managers allocate resources from a relatively fixed pot.1–6 The assumption is that opportunity costs are weighted appropriately in so doing. That is, managers will target resources to the projects that have the highest institutional priorities.
Although this process is intuitive, it is not the same as optimizing resource allocations to achieve specific benchmarks. Doing the latter requires that institutions have targets that they are “managing to” or “managing toward.” Such metrics are derived either from comparisons across AHCs7,8 or by applying tools from accounting, finance, economics, and decision support to derive “optimal” goals.9–14 In the clinical arena, and particularly for practice plan and hospital management, such targets exist. They have been developed through a process extending over many decades, requiring uniform definitions of categories and substantial data analysis capacity. Metrics available through the University Health System Consortium, Medical Group Management Association, Faculty Practice Solutions Center, and more define standard (often considered best) practices and provide institutions with targets to “manage toward.” At the most general level, this allows organizations to “right size” their resource allocations.
The situation is decidedly different in the research and educational arenas of AHCs. There is a paucity of common databases and management tools equivalent to those in the clinical arena. Hence, there is usually only a tenuous basis for determining whether the optimal investment has been made, or whether it is too large or too small. The implicit assumption, particularly in the research arena, is typically that bigger is better, despite the widely accepted dilemma that even the most desirable sources of research funding underrecover total research costs.15
There are myriad reasons for this situation. There is often limited motivation on the part of individual AHCs to engage in external data collection and analysis as a basis for resource allocation in education or research. It is commonplace for institutions to imagine that their resource allocation decisions for education and research are unique and cannot be compared in any useful manner with those of other AHCs. Research in these domains is not organized to readily allow cross-institutional comparisons. For example, there has not been substantial research on educational epidemiology, to use the term suggested in 2004 by Carney et al.16 That is, there has been limited application across the physician and health professional education continuum of observational and randomized experimental designs to probe best practices. In the research arena, despite the availability of data sources and modes of analysis that are common across institutions, and an array of aspirations which are widely shared,17 there is little or no coordinated effort to use the data to derive “best” or even “common” practices. Arguably, a centralized process and organization is needed to coordinate this endeavor.16
In this article, we take a step in that direction, with the goal of improving overall AHC management. Our primary purpose in writing this article is to set the stage for a detailed answer to the following question: What are the common or distinctive features of an AHC, independent of size and institutional configuration, that drive or result from resource allocation, in the research and educational arenas? The longer-term goal is to determine how commonalities and/or differences can be used to guide institutional decision making, particularly around resource allocation.
2007–2008 Academic Health Center Census
In 2007–2008, the Association of Academic Health Centers (AAHC) conducted a survey of member institutions to collect information on all AHC activities, including all missions and all health sciences professions. The 2007–2008 AAHC survey was divided into eight sections, covering institutional identity (including trends), education, health care delivery, finances, operations, information systems, research, and facilities. Questions with qualitative (nonnumerical) responses mainly addressed organizational structure and attitudinal areas. Questions with numerical responses were directed toward staffing, organizational scale, and financial operations. Including all possible answers, more than 900 responses were requested from each institution. Hence, the census instrument was designed with the explicit expectation that different parts would be filled out by separate individuals, including those in individual health science colleges.
A total of 79 member institutions submitted census responses (78% participation rate), with substantial and expected variability in the comprehensiveness of the responses. Fifty-four of the responding institutions (68%) provided comprehensive information. The results were summarized in a confidential binder provided to all AAHC members, but without identifying individual institutions by name. Most responses were collated and compared by absolute values, predominantly reflecting differences in institutional scale and scope, with additional discrimination by factors such as public versus private and hospital ownership.
For this article, responses from the AAHC 2007–2008 AHC census were analyzed using a different approach: ratio analysis.7,18 The underlying concept was to normalize data from an individual institution to that same institution, by creating a ratio of two separate values from the institution. The ratios were then compared across institutions. For example, rather than comparing different AHCs based on the absolute values for total number of faculty FTEs, or total number of FTEs (i.e., faculty and staff), the ratio of faculty FTEs/total FTEs was calculated for each institution, and then values were compared across institutions. As another example, rather than comparing different AHCs based on total payroll expenses, or total operating expenses, the ratio of payroll expenses to operating expenses was determined for each AHC, then compared across AHCs.
This strategy minimizes the effect of institution size on the responses, size being the predominant limitation of using absolute values to develop meaningful metrics. The result is a range of responses that are effectively normalized. Displaying these responses in graphical form is an effective way to determine both the shape and the range of the distribution, and the data can be readily scrutinized to determine where any given institution falls within that distribution. By comparing values from different types of organizations, it becomes evident which ratios are generally applicable across AHCs and which distinguish one AHC from another. Ratios that, in their current forms, are not usable or interpretable are also apparent, because they range widely with no discernible pattern. This most commonly occurs when the numerator and/or denominator lacks a standard definition and, hence, is accounted differently across the various AHCs.19 A further limitation arises when the value captured in the numerator and/or denominator is a composite derived from highly heterogeneous parameters. Given the nature and purpose of the data collection for the 2007–2008 AAHC census, we anticipated that responses would fall in all of the above categories, which, in part, would help to direct a second phase of the project.
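The ratio-analysis strategy described above can be sketched in a few lines of code. The institutional values below are purely illustrative, not census data; the point is that normalizing each institution to itself yields values that are comparable across AHCs of very different sizes.

```python
# A sketch of ratio analysis using hypothetical institutional values.
# Absolute FTE counts mainly track institution size; a within-institution
# ratio does not, so ratios can be compared across AHCs.

institutions = {
    # name: (faculty FTEs, total FTEs) -- illustrative values only
    "AHC 1": (1200, 9500),
    "AHC 2": (450, 2600),
    "AHC 3": (2100, 14000),
}

# Normalize each institution to itself: faculty FTEs / total FTEs
ratios = {name: faculty / total for name, (faculty, total) in institutions.items()}

for name, r in sorted(ratios.items(), key=lambda kv: kv[1]):
    print(f"{name}: faculty FTEs / total FTEs = {r:.3f}")
```

The same pattern applies to any of the ratios discussed below (e.g., payroll/operating expenses); only the numerator and denominator change.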
All attention in the current presentation is directed toward analysis of those numerical data from the 2007–2008 AHC census that are amenable to ratio analysis. The majority of the numerical data were collected in two categories: (1) enrollments, faculty, and staff, and (2) financial performance and operations.
Based on data collected, some of the ratios could only be calculated for the entire AHC, whereas others could be calculated for individual health professions colleges or programs. Data from the following ratios will be discussed, in categories defined below as staffing ratios and operating ratios.
We calculated two categories of staffing ratios:
* Total AHC faculty FTEs/total AHC FTEs
* Total enrollment/total faculty (by health professions discipline)
We calculated five categories of operating ratios:
* Total AHC payroll/total AHC FTEs
* Total AHC payroll/total AHC operating expenses
* Total payroll/faculty and total operating expenses/faculty (by health professions discipline)
* Total operating expenses/total enrollment (by health professions discipline)
* Operating expenses by source/total operating expenses
We presented data for each calculated ratio in those categories in one or both of the following forms:
* All institutions, in sorted order, from smallest value to largest value
* Histogram, with institutions clustered by ratio values, to demonstrate the distribution of responses.
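The two presentation forms listed above can be sketched as follows, again with hypothetical ratio values rather than census data: a sorted series of institutional ratios, and a histogram that clusters institutions into discrete bins.

```python
# Sketch of the two presentation forms: sorted values and a histogram
# (ratio values are hypothetical, chosen to avoid bin boundaries).
from collections import Counter

ratios = [0.12, 0.14, 0.11, 0.22, 0.13, 0.28, 0.16, 0.12, 0.46, 0.13]

# Form 1: all institutions in sorted order, smallest to largest
sorted_ratios = sorted(ratios)

# Form 2: histogram, clustering institutions into 0.05-wide bins
bin_width = 0.05
bins = Counter(int(r / bin_width) for r in ratios)
for k in sorted(bins):
    lo = k * bin_width
    print(f"{lo:.2f}-{lo + bin_width:.2f}: {bins[k]} institutions")
```

As the article notes, the histogram form is often the more informative of the two, because a few extreme (or erroneous) values can dominate the visual range of a sorted plot.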
Total AHC faculty FTEs/total AHC FTEs.
Institutions provided values for total AHC faculty FTEs and total AHC FTEs. The ratio of total faculty FTEs to total FTEs was determined (see Figure 1, panels 1A and 1B). In the most general sense, this ratio is a reflection of the number of staff to support each faculty member.
The individual responses, with values for each institution, are illustrated in Figure 1, panel 1A. As explained in more detail in the figure legend, private institutions are designated with an “A,” and public institutions with a “B.” The associated numerical designation indicates where the institution falls in terms of National Institutes of Health (NIH) funding in 2007–2008, if all AHCs are aggregated by quintiles. For example, B-1 is a public institution in the top quintile of NIH funding.
Although the spread of values in Figure 1, panel 1A is wide, the appearance is deceptive, because the y-axis must be expanded to capture values for the four institutions with the highest values. A more useful representation is the histogram shown in Figure 1, panel 1B. Individual institutions are aggregated into discrete ranges. There was an obvious clustering around a peak value, with one-half of the institutions displaying values between 0.10 and 0.15 and two-thirds showing values between 0.1 and 0.3. Taken literally, this suggests that each faculty member is supported, directly or indirectly, by three to six additional personnel.
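The "additional personnel" reading follows from simple arithmetic: if r is the faculty share of total FTEs, then each faculty FTE is accompanied by (1 - r)/r non-faculty FTEs. The sketch below uses illustrative ratio values from within the observed range, not census data.

```python
# Arithmetic behind the "additional personnel per faculty member" reading:
# if r = faculty FTEs / total FTEs, then each faculty FTE is accompanied by
# (1 - r) / r non-faculty FTEs.

def support_per_faculty(r: float) -> float:
    """Non-faculty FTEs per faculty FTE, given the faculty share r of total FTEs."""
    return (1 - r) / r

for r in (0.14, 0.20, 0.25):
    print(f"r = {r:.2f} -> {support_per_faculty(r):.1f} additional personnel per faculty FTE")
```

For example, r = 0.25 implies 3 additional personnel per faculty member, and r of about 0.14 implies roughly 6, spanning the "three to six" range cited above.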
Values for total AHC faculty FTEs/total AHC FTEs of greater than 1.0 cannot be correct. This contributes to the wide distribution of values in Figure 1, panel 1A, which is less impactful when data are expressed in histogram form (e.g., Figure 1, panel 1B). We estimate that 12% to 15% of values for all ratios reported below may also be incorrect. This does not obviate the primary observations we have made in this article, as discussed further below.
More important, this is a ratio that is confounded by ambiguous definitions for the numerator. Faculty numbers are notoriously difficult to compare across institutions, given the range of faculty tracks and the inclusion (or not) of volunteer, part-time, adjunct, non-tenure-track, and research faculty, and more. Even when comparing values for a single institution from different sources (U.S. News & World Report, National Science Foundation, Association of American Medical Colleges, institution Web sites, and more), values can vary substantially.
The breakdown of categories contained within total AHC FTEs will vary widely across institutions. However, the fact that it is a total number argues that its use in the denominator is more valid: regardless of how faculty FTEs are compartmentalized in the numerator, they will be simultaneously captured in the denominator.
Most institutions have explicit or implicit guidelines for administrative infrastructure and core support staff per faculty, yet this information is not typically available for perusal, let alone for comparisons that would allow for meaningful decision making. Decisions about consolidating or dividing administrative functions for multiple units typically revolve around projected efficiencies, using internal data, rather than derived from “best practices” models. This is particularly true in the research arena, whether dealing with individual units (departments, centers) or at an institutional level (e.g., grants and contracts, research compliance). In the clinical domain, especially for hospital staffing, there is a substantial body of published research data from an array of institutions,20,21 guiding institutional decision making. It is reasonable to expect that, with appropriate and granular subdivision of categories, a similar situation would apply to the educational and research domains.
Total enrollment/total faculty, by health professions college.
Each health professions discipline was asked to provide a single number for total faculty and total enrollment. The ratio of enrollment to faculty was calculated for each discipline, at each institution, and values were compared (Figure 2). Medicine displays the lowest ratios, with values between 0.15 and 3.0. Fifteen of the institutions in the lower half for medicine were research-intensive (top two quintiles for research funding) versus five research-intensive institutions in the upper half (P = .025), reflecting a larger number of faculty per enrolled medical student in the former case. Dentistry, pharmacy, and nursing had peak ratios for total enrollment/total faculty of approximately 2, 9, and 10, respectively, with distributions showing a rightward tail. (Note the break in scale on the x-axis for values above 2.0.) These values reflect a progressively larger number of students per faculty member when dentistry, pharmacy, and nursing are compared. In contrast to medicine, values were no lower for research-intensive institutions than for less research-intensive ones.
These ratios are again confounded by using total faculty numbers. Total numbers for enrollment tend to be more uniform when comparing different sources of data, with variation being reflected more in the subsets (undergraduate, graduate, doctoral) than in the overall summation (K.A.J., personal observation).
Total AHC payroll/total AHC FTEs.
Values for total payroll/total FTEs, reflecting the average salary (and fringe benefit) expense for all AHC employees, lie predominantly between $40,000 and $100,000 per FTE (Figure 3, panel 3A). The distribution is approximately normal (Figure 3, panel 3B), with half of the institutions between $60,000 and $80,000 per FTE, and two-thirds between $50,000 and $90,000 per FTE. There is no obvious clustering by public versus private status or by research intensity. Perhaps surprisingly, there was also no obvious difference by geographic region (not shown). In particular, values for AHCs in the Northeast are no higher than those for institutions in regions with lower costs of living.
In general, payroll expenses are not included in publicly available sources of data for AHCs. This is one of the most important contributions of the AAHC census for this analysis. Payroll numbers are readily available in AHCs, regardless of the financial system in use. It is important to point out that payroll includes, but is not limited to, faculty salaries, thereby reflecting a fuller view of personnel expenses than can be gathered from faculty salary comparisons.
As is the case for all ratios derived from total AHC values, and as discussed above, detailed discrimination by mission and health science discipline would be essential before considering the values useful as management tools. Market forces for support staff salaries will vary locally, and there is no expectation that even highly granular data will drive institutional decision making for most AHCs. Instead, the information would be most useful for those AHCs with values well outside the “normal” range.
Total AHC payroll/total AHC operating expenses.
Total payroll/total operating expenses is one of the tightest of all the ratios (Figure 4, both panels). Nearly 60% of all responses fall between 0.44 and 0.60. Because the values for the seven institutions with payroll/operating ratios >1 cannot be correct under standard definitions of operating expenses, the clustering is even more notable.
The values for total AHC payroll/total AHC operating expenses were compared for the following categories:
* Public versus private
* Research-intensive (top two quintiles of research funding) versus non-research-intensive (bottom three quintiles of research funding)
* Size, as reflected in total enrollment, total FTEs, total hospital revenue, or total operating expenditures. In all cases, ratios were compared for the top half versus the bottom half
* Organizational structure, as reflected in health system control or hospital ownership
Only in the last category—organizational structure—was a significant difference noted (Table 1). The ratio of total payroll/total operating expense for AHCs with owned or partially owned hospitals was significantly lower than for those with no hospital ownership. Although ratios were lower for AHCs that controlled their health systems than for those that did not, the difference was not statistically significant.
These values can be put in the context of payroll/operating expense ratios in various sectors of the economy. In the hospital sector, payroll/operating expense ratios cluster around 0.5, regardless of hospital type (e.g., large university teaching hospital, small community hospital). The ratio for faculty practice plans is typically in the range of 0.8.22 A ratio of approximately 0.6 is more characteristic of universities as a whole, although data are limited, and the educational service sector as a whole has a ratio of 0.5. (By comparison, values for durable goods manufacturing, construction, and retail and wholesale trade are in the range of 0.2.23) The constancy of these ratios may predominantly reflect the nature of the services provided (and hence the staffing complement needed to provide them), the distribution and sources of revenue, a general notion of organizational structure, or, more likely, a combination of all three; whatever the cause, the ratios are used as management tools to make resource allocation decisions.
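As a sketch of how such sector benchmarks might function as a management screen, the code below flags an institution whose payroll/operating ratio falls well outside a benchmark band. The benchmark values are taken from the figures above; the institutional ratios and the tolerance are hypothetical choices for illustration.

```python
# Sketch: screening a payroll/operating-expense ratio against sector
# benchmarks cited in the text (institutional values are hypothetical).

SECTOR_BENCHMARKS = {
    "hospital": 0.5,
    "faculty practice plan": 0.8,
    "university overall": 0.6,
    "educational services": 0.5,
}

def outside_benchmark(ratio: float, benchmark: float, tolerance: float = 0.1) -> bool:
    """True when the ratio falls outside benchmark +/- tolerance."""
    return abs(ratio - benchmark) > tolerance

print(outside_benchmark(0.52, SECTOR_BENCHMARKS["hospital"]))  # close to benchmark
print(outside_benchmark(0.75, SECTOR_BENCHMARKS["hospital"]))  # well outside
```

Consistent with the article's point, such a screen is most useful for institutions in the tails of the distribution rather than for fine-grained ranking near the center.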
Our data suggest that the ratio of payroll/operating expenses, if refined by more precise data collection, could be used in a similar fashion for the academic missions of teaching and research. It is almost certain that the ratio calculated from the AAHC census data, as shown in Figure 4, panel 4B, is lower than the actual value, largely because some components of payroll expenses are not included in the numerator. As verified in discussions with financial leaders at several institutions, not all payroll expenses for personnel managing the practice plan revenue cycle were included in the numerator, yet all operating expenses from the practice plan were included in the denominator. In another instance, a portion of the payroll expenses for clinical faculty covered by the hospital was not included in the numerator.
In the majority of cases, the values entered for total operating expenses for medicine in the AAHC survey were the same as those entered in the institution's required Liaison Committee for Medical Education (LCME) Annual Financial Questionnaire on Medical School Financing (K.A.J., personal observation). As discussed below, and elsewhere,19 this is advantageous and appropriate, by minimizing duplication of effort, ensuring internal consistency, and providing longitudinal data. This strategy can be generalized to other categories, using information in databases such as the Association of American Medical Colleges Medical School Profile System.
As a more general conclusion, the ratio of payroll/operating expenses has units of dollars in both the numerator and the denominator. Such ratios are intrinsically more likely to be comparable across institutions, because there is no ambiguity about the unit of measure, only about what gets included in each category.
Total payroll/faculty and total operating expense/faculty.
A related perspective is to determine the average payroll expense per faculty member, reflecting the payroll (and fringe benefit) costs of supporting the faculty member and associated staff, and, similarly, the average operating expense per faculty member. Values vary over a wide range, regardless of whether tenure-track faculty, tenure plus non-tenure-track faculty, or faculty FTEs are used in the denominator (not shown).
The wide range of values reflects numerous considerations. The biggest confounder is the ambiguity in definitions of faculty numbers, as already discussed. More interesting is the strong tendency for the highest values for both ratios to be from public institutions. For example, the 17 highest values for payroll/total tenure and non-tenure-track faculty were from public institutions, compared with 21 public and 16 private for the 37 institutions with lower values. Nine of 18 with the highest values in this same category were from public institutions in the top two quintiles of research funding (B1 and B2). By comparison, only 3/36 of the lower values were from B1 or B2 institutions (P = .002). Out of 54 institutions, the highest 18 values for operating expenses/total tenure and non-tenure-track faculty were from public institutions, compared with 19/36 public institutions with lower values (P = .035). Although the intuition is clear for why research-intensive institutions would have higher values for payroll/faculty, given the large number of nonfaculty personnel associated with the research mission, it is less clear why public institutions should dominate the high values for both ratios.
Total operating expense/total enrollment (by health professions discipline).
Values for operating expense per enrolled student are shown for dentistry, medicine, nursing, and pharmacy (Figure 5). The enormous range of values when compared across health professions is notable, with values for 29 out of 34 nursing schools being less than $25,000 per enrolled student, ranging up to greater than $1.8 million/enrolled medical student (mean of $591,200 and median of $378,100). Note the break in scale on the x-axis for values above $200,000.
In the published literature, the projected costs of medical student education on a per-student basis vary over a range of more than 10-fold, largely as a reflection of how the question is framed.24 Differentiating operating costs per student by subcategory, with appropriate distinction between direct and indirect costs, provides one approach to generating values that can be readily compared across AHCs.
Percentage of operating expenditures by source (health science college, hospital, practice plan).
Seventy-seven percent of the responding institutions (61/79) had at least 80% of operational expenditures explained by administration, hospital, medicine, and practice plan (not shown). Institutions with the lowest percentage were osteopathic schools. The fact that such a high proportion of total operating expenditures can be captured using publicly available data from annual reports provided to the LCME (e.g., for medical schools, hospitals, and practice plans) is germane for future projects.
We estimate that 12% to 15% of the values for all ratios reported above may be incorrect. This is not surprising, given the initial design of the survey, in which different sections were completed by different individuals, nor does it detract from our major message: the utility of ratio analysis in providing comparisons among institutions. We made no attempt to modify or omit any of the values, or to apply accounting rules, even in circumstances where the values could not be correct based on first principles (e.g., payroll-to-operating-expense ratios >1). Instead, such values illustrate that ratio analysis is a sensitive way to determine whether information provided by a single institution is internally consistent, and to identify circumstances in which questions were interpreted differently by individual institutions. Ratio analysis also provides a targeted approach to soliciting missing information for those ratios judged to be most useful. Most important, it creates the opportunity to identify common principles.
There is no implication that the data collected in the 2007–2008 AAHC census are highly current. Nor is there an implication that the values calculated will remain similar when data using more detailed definitions are collected. Rather, the analysis is intended to demonstrate a strategy, before undertaking a systematic project, to define parameters and collect information using standard definitions.
Implications for Future Work
Our primary purpose in writing this article was to set the stage for a detailed answer to the following question: What are the common or distinctive features of AHCs, independent of size and institutional configuration, which drive or result from resource allocation, in the research and educational arenas? The longer-term goal is to determine how commonalities and/or differences can be used to guide institutional decision making, particularly around resource allocation.
Our data provide a justification for pursuing this strategy, when coupled with the rationale presented in the introductory paragraphs. Additional arguments lend credence to this approach and are discussed in more detail elsewhere.16,19 These include the following considerations.
First, much if not most of the needed data are collected on a regular basis by AHCs—there is no need to start de novo. This includes information that meets highly precise, common definitions.
An example of this approach, applied to an essential issue for AHCs, is space management. Institutions monitoring cost-efficiency and adequacy of overhead support for space utilization (whether wet lab, dry lab, office, educational space, or otherwise) invariably use calculations derived from internal data because there are limited publicly available data for comparisons across institutions.25
In fact, precise categorization of quantity and quality of research and education space is included in A21 reports to the Office of Management and Budget, conducted using highly detailed and uniform definitions for all AHCs. In conjunction with the data on funding (grant, tuition, state support, and other) used to support the space, also defined in precise fashion, ratios for dollars/net square foot are readily comparable across AHCs. This would require that institutions provide their A21 data to a central repository. This information is often considered to be sensitive because of the implications for indirect cost recovery negotiations, reflecting a larger issue associated with dissemination of information now considered confidential. Deidentifying institutions, as done herein, provides one obvious approach to this issue.
Second, there is a substantial body of publicly available data, from associations, foundations, accrediting bodies, governmental organizations, and more, that can be “mined” for inclusion. The data are typically longitudinal, updated regularly, cross-sectional, complete, and available for review. Student enrollments are a particularly good example. Even recognizing that information can vary depending on the source, this is a rapid and inexpensive first step.
Third, the most compelling argument is that the intrinsic functions in education and research, just like the intrinsic functions in the clinical arena, are similar, irrespective of institution. Tasks such as giving a lecture, coordinating a team teaching exercise, conducting wet-lab or dry-lab research, and more are intrinsically the same functions, independent of institution or geographic location.
The reaction of many readers to this last statement may be that it does not take into account institutional history, culture, expectations, finances, management, and other considerations. We recognize that all of those factors will contribute in some way to the institutional data. The purpose of the proposed project is not to homogenize these individual functions across institutions. Rather, it is to provide a range of data, standardized for inclusion, that serve as a metric for comparison and that can be used as desired for making resource allocation decisions. Currently, institutions that lie in one of the tails of a histogram distribution will likely conclude that their position reflects differences in data collection (if they do not like their ranking) or perhaps reflects institutional strategic exceptionalism (if they do like their ranking). If data were validated and derived with uniform definitions, it would instead become a management tool.
The effort, expense, and organizational infrastructure required to pursue this approach are substantial. A high level of trust in the data management process would be required, and the benefits would not be realized for some time. Nonetheless, especially in the face of an increasingly difficult medium- and long-term future for research and educational funding, we believe this is a highly worthwhile investment, one that will generate valuable tools for the strategic management of AHCs.
The authors acknowledge the insightful comments of Anthony Knettel in reviewing the manuscript.