Academic anesthesia departments in the United States have clinical, educational, research, and administrative responsibilities. A medical school usually expects a department and its individual faculty to succeed in all of these endeavors, as well as in the clinical enterprise (i.e., hospital responsibilities), independent of the source of funding. Some departments have tried to base their financial compensation plans on success in all four of these responsibilities, even though funding comes primarily from clinical service (1). However, in the last 10–15 yr, health care reform (i.e., managed care) has demanded more and provided less money from clinical activities, increasing the dangers to academic groups. Recommendations to improve productivity and to pay faculty increasingly through financial bonuses instead of salary (2) are examples of how academic departments of many specialties have restructured in response to managed care (3).
The availability of nonclinical time for academic pursuits frequently convinces anesthesiologists to choose academic careers rather than private practice. Yet most of the financial support for academic anesthesia is derived from clinical productivity. Therefore, maximum and effective use of faculty clinical time is crucial.
Departments of anesthesia have used various approaches to determine how much nonclinical versus clinical time an individual faculty member receives (1,4–5). The time available for clinical activity is a common measure of a faculty member’s clinical obligation. Typically, for example, a 60% clinical commitment could indicate 3 days/wk in the operating room (OR) (or other clinical activities) plus night call; 4 days/wk of clinical responsibility would indicate an 80% clinical commitment. In an academically and clinically diverse department, how should the assigned clinical time be quantified?
Recently, Abouleish et al. (4) retrospectively analyzed different methods of measuring the clinical productivity of individual faculty. The advantages and disadvantages of using units of time/day worked, clinical days worked, and/or cumulative number of American Society of Anesthesiologists (ASA) units earned were thoroughly described. However, the retrospective nature of their analysis prevented them from determining whether the method of measuring productivity would influence faculty behavior.
Abouleish et al. (4) concluded that the normalized clinical days per year (NCY) approach was the most helpful measure of productivity. This method is very similar to what our department has been using for over 30 yr. Because it does not measure the actual amount of clinical care given, we think it is a measure of clinical “availability,” not actual “productivity.” Abouleish et al. (4) agree that such a system does not “provide rewards or incentives to work hard on that day of work.” They emphasized that using the NCY approach avoids assessing whether a faculty member is a “high producer” or “low producer” versus “assigned to high-unit clinical sites” or “assigned to low-unit clinical sites.” We think that this system is difficult to use in a department with multiple subspecialties and a large nonspecialty or generalist anesthesia (GA) group. For example, is a day in ambulatory anesthesia the same as a day in cardiac anesthesia? How should night call be quantified when it differs in intensity (e.g., amount of anesthesia given/night) and frequency (e.g., number of nights/month) among specialty groups?
We hypothesized that despite equality of clinical days assigned, actual clinical productivity (e.g., amount of anesthesia given) varied considerably between and within specialty groups, including GA. Clearly, a quantitatively accurate assessment of clinical care productivity would attenuate unresolved concerns about unequal distribution of clinical responsibilities. Furthermore, faculty may not be motivated to take longer or extra cases when needed if an accurate accounting of their productivity during this time is not available. In other words, an “availability” accounting system does not reward the clinician who volunteers for extra clinical care or delivers efficient clinical care (e.g., turnaround times). Last, billing is not effective when individual faculty do not accurately and completely provide all necessary information to the billing agent. If their salary were dependent on the accuracy and completeness of their anesthetic record, perhaps they would be more attentive to record keeping.
After listening to lengthy debates about the relative contribution of individual faculty members during our faculty meetings for over 20 yr, one of the authors (RDM) decided that we needed to switch from a system of “availability” (e.g., amount of time available for clinical duties) to a “productivity”-based system (e.g., actual amount of clinical volume produced). Although there are many possible approaches (e.g., number of ASA billing units), we wanted to quantify clinical productivity by measuring the actual amount of time spent delivering anesthesia care. We also wanted to reward faculty for their clinical productivity independent of their specialty; this ruled out using the ASA relative value guide. After consultation with many groups, including private practitioners, we developed a system of “billable hours” as a measure of clinical productivity for academic anesthesiologists. In this system, faculty who met their clinical commitment, as defined by “billable hours,” would receive their negotiated salary. In theory, those who exceeded their commitment would receive more money or less clinical time in the future. Conversely, those who did not would receive less money and/or more clinical time in the future.
We herein will describe the system and our experience with it. The system confirms our hypothesis that “availability” methods (e.g., days/week available) produce quite different results than a “productivity” method (actual time of anesthesia). Also, we have subjective evidence that faculty behavior has been influenced by this change in the way that faculty clinical productivity is quantified.
All faculty who have surgical and obstetric anesthesia responsibilities at the University of California, San Francisco, Moffitt-Long and Mount Zion hospitals were included in the changeover from the “availability” to the “billable hours” system. Faculty whose primary surgical and obstetric responsibility was not clinical anesthesia (e.g., chronic pain and critical care) were not included in this analysis.
Part-time and volunteer faculty were omitted from the analysis. Faculty who had <6 mo of employment (either newly employed or leaving) and international visiting anesthesiologists were also omitted.
Multiple services or groups are defined based on call duties performed, cases performed, and conditions of employment. The four call specialties are cardiac (EH), liver transplant (EL), pediatrics (EP), and nonspecialty general anesthesia. Several volunteer or part-time faculty have no call responsibilities. Two faculty perform call duties on more than one service (one does pediatrics and cardiac; the other, pediatrics and liver transplant). For the purposes of the analysis, the first was omitted from the cardiac group and the second from the pediatric group, so that each was counted only in the group where most of their duties are performed.
Only faculty taking full call were included in the analysis. Three GA faculty and five international faculty take minimal or no call. The intensive care unit (ICU) faculty take their call in the ICU, so they are not included in the analysis. Faculty who have clinical responsibility on the acute pain service perform 50% of their call in the OR as a result of pain service call commitment and were excluded from the call analysis. At Mt. Zion, two faculty do not take full call because of commitments on the chronic pain service and were not included in the analysis.
The unit used for determining a faculty member’s clinical credit is “billable hours.” A billable hour is defined as a unit measuring the actual delivery of anesthetic clinical care; preparation for the delivery of clinical care (e.g., preoperative evaluation) would not constitute a billable hour. The billable hours were extracted from the anesthetic record by administrative personnel. The credit given to a faculty member was the total time for delivery of anesthesia clinical care, as indicated by the anesthetic record.
A few additional credits were added to the actual billable hours on the anesthetic records. For each new case started, a faculty member would receive 15 min of credit in addition to the billable hours calculated from the anesthetic record. An additional 20% of the total billable hours was added to the total credit when clinical care was delivered on weekends or after 6:00 pm during the week. When faculty were supervising two rooms at the same time, only one billable hour was counted; that is, one could not receive more than 1 h of billable hours per 1 h of time (5). A computer algorithm checked all other cases performed by the faculty member for overlap so that concurrent time would not be counted more than once. This restricted the credit given to the actual time performing clinical services, whether the faculty member was supervising one room or two.
The only exception to the above calculations was in obstetric anesthesia. Labor epidural anesthesia was not checked for concurrency and was considered noncontinuous care. Faculty were given 1 h of total credit for labor epidurals. Cesarean deliveries and other obstetric procedures requiring the continuous presence of an anesthesiologist would have their billable hours calculated as is done in the OR.
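As an illustration, the crediting rules above (15 min per case start, a 20% after-hours bonus, and single-counting of concurrent rooms) can be sketched in code. This is a minimal sketch under our own assumptions, not the department's actual program: the function names are hypothetical, cases are represented as (start, end) datetime pairs, and the after-hours bonus is applied per merged interval based on its start time rather than being recomputed minute by minute.

```python
from datetime import datetime

STARTUP_CREDIT_H = 0.25   # 15 min of credit added per new case started
AFTER_HOURS_BONUS = 0.20  # 20% bonus for weekend or after-6pm care


def merge_intervals(cases):
    """Merge overlapping (start, end) intervals so that concurrent
    supervision of two rooms is counted only once."""
    merged = []
    for start, end in sorted(cases):
        if merged and start <= merged[-1][1]:
            merged[-1][1] = max(merged[-1][1], end)  # extend current span
        else:
            merged.append([start, end])              # start a new span
    return merged


def billable_hours(cases):
    """Credit = non-overlapping anesthesia time + 15 min per case,
    with a 20% bonus on after-hours care (simplified: judged from
    each merged interval's start time)."""
    credit = STARTUP_CREDIT_H * len(cases)  # startup credit per case
    for start, end in merge_intervals(cases):
        hours = (end - start).total_seconds() / 3600
        after_hours = start.weekday() >= 5 or start.hour >= 18
        factor = 1 + AFTER_HOURS_BONUS if after_hours else 1.0
        credit += hours * factor
    return credit
```

For example, two overlapping weekday rooms running 8:00–12:00 and 10:00–14:00 merge into a single 6-h span, which with two case-start credits yields 6.5 billable hours.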
Previously, a faculty member was assigned a day in the OR, and the credit given was the same regardless of the actual time of anesthesia delivered; night call varied in intensity and frequency by specialty. This clinical requirement was transferred to the billable hours system. Clinical assignments had been made on a daily basis plus call; a typical faculty member, for example, might have 3 clinical days per week plus night call. We arbitrarily assigned an expectation of 7.5 billable hours per day of assignment. Each month, faculty were assessed 1.5 days of call, which translates into 11.25 billable hours. Thus, for a 4-wk month, a faculty member who would have been required to provide 12 clinical days plus night call under the previous “availability” system would, under the billable hours system, be expected to accumulate 7.5 × 12 = 90 h plus 11.25 h of call credit, for a total of 101.25 h per 4-wk period.
Calculations were then made for each individual faculty member and for the department overall with regard to expected billable hours and the hours actually collected. The term “credit” refers to the number of hours collected minus the expected hours. Individual reports were distributed to the faculty each month listing the cases performed and summarizing their clinical commitment, vacation, and clinical credit/billable hours. Faculty are responsible for reviewing the information and making corrections if needed. An example of such a report is shown in Figure 1.
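The expectation and credit arithmetic described above reduces to a few lines of code. This is a minimal sketch assuming a 4-wk reporting period; the function names are ours for illustration, not part of any departmental system.

```python
HOURS_PER_CLINICAL_DAY = 7.5  # expected billable hours per assigned day
CALL_DAYS_PER_MONTH = 1.5     # monthly call assessment, in day-equivalents


def expected_hours(clinical_days_per_week, weeks=4):
    """Expected billable hours for a reporting period: 7.5 h per
    assigned clinical day plus 1.5 call days (11.25 h) per month."""
    or_days = clinical_days_per_week * weeks
    call_hours = CALL_DAYS_PER_MONTH * HOURS_PER_CLINICAL_DAY
    return or_days * HOURS_PER_CLINICAL_DAY + call_hours


def credit(collected_hours, clinical_days_per_week, weeks=4):
    """Credit = hours collected minus expected hours; a positive value
    means the faculty member exceeded the clinical commitment."""
    return collected_hours - expected_hours(clinical_days_per_week, weeks)
```

For the 3 days/wk example above, `expected_hours(3)` returns 101.25, matching the hand calculation of 90 h plus 11.25 h of call credit.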
We performed some additional analysis beyond that described above. First, we determined the billable hours for each of the subspecialty services; in essence, we calculated the total credit over or under the expected hours and made comparisons between the various clinical groups. We then compared the total individual expected hours versus the credit over or under for each faculty member. The hypothesis was that individuals with a large clinical commitment (e.g., 4 days/wk) would not be able to accumulate as many billable hours over their expectation as those with small clinical commitments (e.g., 2 days/wk). Lastly, we compared the various clinical services with regard to the number of faculty and their call credit.
Differences in Call Credit and Credit Over were analyzed by analysis of variance, with the Tukey-Kramer test to compare all pairs of data for significant differences. These results were confirmed by a nonparametric test (Kruskal-Wallis) because of the distribution of the data. Multiple regression was used to search for predictors of Credit Over, with expected hours and service as candidate predictors. P ≤ 0.05 was considered statistically significant.
Nearly all faculty averaged more than the 7.5 expected billable hours per day (Fig. 2). Even though each faculty member was assigned a “day” in the OR, there was considerable variability in the average billable hours per day obtained (Fig. 2). All clinical groups, except obstetrics, were clearly well over the total expected number of billable hours (Fig. 3). When mean individual credits were calculated per faculty member, the GA group had delivered more billable hours over their expectation than had the other groups (Fig. 3) (P < 0.05). The GA group (E1) had more call credit than the pediatric service and the Mt. Zion faculty (Fig. 4). When individual faculty were analyzed, there was large variability between and within groups (Figs. 3 and 4).
When credit over expectation was compared with total expected hours, no correlation was found. This relationship was also examined by analyzing the former days/week of required clinical commitment versus over-credit billable hours (Fig. 5). Even after compensating for differences in credit between services and performing a multivariate analysis, no relationship could be found. Lastly, most of the credit over expectation was obtained during weeknights and weekends (Fig. 4).
We conclude that the “billable hours” system allows an accurate assessment of individual faculty clinical productivity across specialties and clinical responsibilities. This system allows an objective analysis of whether faculty groups (i.e., specialties) and/or individuals are meeting their departmental commitments and/or clinical responsibilities in comparison to others. It also allows the department to determine its personnel status relative to specific clinical needs objectively, based on data analysis rather than the subjective opinions of faculty. Clearly, at the time this study was performed, either our Department of Anesthesia was understaffed in certain areas or the billable hours we required of each faculty member to meet their clinical commitment were not large enough. Assuming that the expectations we set for faculty are not changed, projected clinical responsibilities can be equalized among the faculty based on billable hours data. For example, extra clinical time would be assigned to those with a small number of billable hours and vice versa. Alternatively, if there is widespread overage and clinical responsibilities cannot be decreased for those who have accumulated excess billable hours, then additional compensation (e.g., financial) can be provided to those who exceed their clinical responsibility.
One of the greatest difficulties with increasing subspecialization in anesthesia is the distribution of call each night among faculty. By having multiple faculty on call at the same time because of subspecialization, the overall call burden is increased. With unpredictable clinical volumes on surgical services, such as liver transplant and pediatrics, using the “billable hours” system to “even out” the call is difficult. When there are uneven billable hours within a clinical service (e.g., pediatrics and GA—see Fig. 3), then faculty with smaller billable hours can be given more clinical responsibilities to even out their billable hours. However, when one subspecialty, as a group, has too many or too few billable hours, then only reducing or enlarging the number of faculty in that subspecialty will make billable hours meet expectations.
One of the most apparent results is that GA faculty were performing noticeably more clinical service while on call than other services. Our analysis of the excess billable hours (i.e., credit) during call could easily have led us to conclude that our billable-hour expectation for call was too small. Had we increased the clinical productivity expected during call, our over credit probably would have decreased. However, it was our judgment that because performing anesthesia during off-hours is an onerous service, the small billable-hour expectation during call was a way to provide extra money indirectly, on the basis of productivity, to those who perform this service.
A limitation of this system is that faculty clearly perform clinical activities for which they may not receive billable hour credit. For example, the generalist anesthesiologists (E1) received many more billable hours (high-intensity call) even though their frequency of call was lower than that of liver transplant (low-intensity call). Clearly, the liver transplant group had much more nonproductive “availability” (i.e., through no fault of their own) than did the generalists. Other examples might include an extensive preoperative evaluation, responding to a code blue, and canceled cases, especially after preoperative evaluations. On the other hand, this is how individuals in private practice are currently paid. In other words, the measure by which society compensates us (i.e., the source of departmental clinical funds) is the basis of billable hours.
We arbitrarily added some limitations. Because we wanted to eliminate payer mix and clinical difficulty as criteria (i.e., all clinical productivity was treated equally), no credit was given for performing more difficult cases. Minimal extra credit was given for supervising more than one room (5). Each case startup was credited with 15 minutes, and with two rooms it is unlikely that a faculty member would spend uncredited time during the day if a single case is delayed; thus, the most rewarding situation is one long, continuous room. However, in the previous system of using “availability,” there was no motivation to run more than one room either. The clear advantage of this system is that it uses an objective analysis of clinical productivity to measure whether an individual faculty member or group of faculty meets their clinical commitments. Quite the opposite of our previous “availability” approach, which many anesthesia departments use, it clearly added a different dimension to faculty behavior. It rewards an individual for performing clinical responsibilities at onerous times (e.g., in the middle of the night), encourages continuity of clinical care by not promoting the desire for early relief, and strongly encourages faculty to fill out the anesthetic record properly; if the record is unacceptable for billing, the faculty member will not receive billable hours credit.
How to determine what should receive “billable hours” is arbitrary. Our specialty’s reimbursement is based on the actual time that anesthesia is given, and we used that definition of productivity for our system. Clearly, there are legitimate arguments that other nonreimbursable clinical activities should be counted in a productivity-based system: being available for call when not doing cases, or receiving credit for assigned days when cases are canceled, especially after an extensive preoperative evaluation has been performed. Such examples are numerous. We started with a “pure” productivity system, as defined by our specialty’s time-based reimbursement schedule. If the temptation to reward these other activities (e.g., preoperative evaluation) with billable hours is indulged, a productivity system will gradually be transformed back into an availability system, and the advantage of a “productivity” system will be lost.
Many measures of productivity other than billable hours could have been used (4). For example, the amount of money generated by each faculty member for clinical work could be used (4). The relative value guide allows more units to be charged for difficult cases than for easy cases. It was our judgment that because our department does not dictate the type of cases that come to the hospital, we wanted to reward clinical activity on an equal basis, rather than according to specialty and payer mix. Although this is the approach we used, a productivity system such as the one described above could be modified if others feel different judgments are preferable for their departments.
In conclusion, we have described a “productivity” system used by our department for the past 5 years. This has allowed objective analysis of clinical productivity of faculty in a manner that both the hospital and surgeons respect. Although we fully recognize that many other measures of productivity could be utilized, as indicated by Abouleish et al. (4), this system is an example that has been quite effective for our department in an atmosphere in which medical schools and hospitals demand increasing accountability of resource utilization from all clinical departments.