Prevention interventions for HIV-infected patients seen in clinical settings have emerged as an effective strategy for reducing HIV transmission. A variety of prevention with positives (PWP) interventions has been shown to be effective in clinical settings, and these strategies hold promise to be a key component in a comprehensive response to HIV. In many cases, medical settings may be the only place where patients have convenient, consistent access to prevention services.1-3 An estimated 350,000 to 528,000 individuals with HIV use the clinical care delivery system and receive regular HIV care.4 Medical providers are in a strategic position to help prevent transmission of HIV by assessing their patients for risky sexual and needle-sharing behaviors and providing counseling or referrals to prevention services.
The Centers for Disease Control and Prevention's revised estimates of the number of newly infected individuals increase the urgency for enhanced prevention with positives efforts. With a growing population of HIV-infected patients, the identification of better strategies for reducing HIV transmission is highly pertinent. This may be particularly true in the current era of heightened concern about efficient allocation of medical services. Decision-makers equipped with valid information on program costs and cost-effectiveness are in a better position to improve the value of HIV prevention spending. This article describes the results of a cost-effectiveness analysis from a 5-year initiative to design, deliver, and evaluate prevention interventions for HIV-infected patients seen in clinical settings. The initiative was sponsored by the Health Resources and Services Administration's Special Projects of National Significance (SPNS). Its purpose was to provide information on the feasibility, acceptability, and effectiveness of interventions deployed in real-world settings funded by the federal Ryan White Program.
In the article reporting outcomes from this project, we observed significant (P < 0.001) declines in self-reported sexual transmission risk among participants assigned to each of three different clinical interventions.5 Although the study outcomes are encouraging, if interventions are to be replicated, there must also be a favorable balance of benefits against the costs of achieving them. Cost-effectiveness analysis can be a powerful tool for comparing the efficiency of health interventions and thus determining where limited resources can generate levels of benefit that are competitive with other feasible uses of these funds. This article reports the cost-effectiveness of the demonstration sites and addresses the following questions: 1) What were the total and unit costs (cost per client served and cost per dose-minute of interaction with clients) over the 3 years of the demonstration project, and how did costs vary across the three intervention types? 2) What was the cost-effectiveness of the SPNS demonstration project considered as a whole? 3) What was the incremental cost-effectiveness among the three intervention types?
Secondarily, we sought to understand the specific cost elements that accounted for variations in unit cost and the relationship between program scale and unit costs.
The three intervention types are characterized in Table 1 and described in detail elsewhere.5,6 Briefly, primary care provider-delivered interventions involved brief risk assessments administered by computer to patients in private while they waited for their medical appointments. Primary care provider interventions were based on proven effective health behavior change theories that helped clinicians to identify the best points of intervention with brief counseling sessions for a particular patient. Specialist-delivered interventions were one-on-one client-oriented sessions, group sessions, or a combination. Individual sessions were led by either social workers or trained HIV-infected peer interventionists. Group sessions were usually co-led by a social worker and peers. Interventions using both strategies, provider-delivered and specialist-delivered interventions, were classified as mixed. At each site, patients were individually randomized to an intervention group or to a control condition (“standard of care”) limited to risk assessment without specific prevention counseling.
At baseline, sexual transmission risk was reported by 15% (n = 1055) of participants assigned to receive the existing standard of care only, 20% (n = 768) of those assigned to clinical provider-delivered interventions, 17% (n = 975) of those assigned to specialist-delivered interventions, and 25% (n = 758) of those assigned to mixed interventions. Transmission risk was defined as unprotected sex with an HIV-uninfected or unknown HIV status partner in the past 6 months.
Overview of Methods
Using standard cost-effectiveness methods, all incremental costs associated with service delivery (ie, excluding research or evaluation costs) in each intervention were summed and compared with the outcomes for that type of intervention.7,8 Cost-effectiveness was evaluated only for those clients who received the intended intervention and was assessed from the perspective of the healthcare system.
Annual costs were tabulated directly from intervention expenditure records. The number of clients served in each year of the program was obtained from standardized reporting documents required by the Health Resources and Services Administration (available on request). Dividing program costs by the number of clients served yielded the efficiency of services production or cost per client. The changes in risky behavior resulting from the interventions were measured by three behavioral evaluations separated by 6-month intervals. We estimated the outcome of most interest, HIV infections averted, using a computer-based epidemic model of HIV transmission that reflects, based on the best available empiric data, the effect of changes in risk behavior on HIV transmission.9 It allows comparison of interventions with different risk reduction approaches but a shared goal of fewer HIV infections. Estimated cost-effectiveness was obtained by dividing program costs by the estimated number of HIV infections averted.
We developed a uniform, standard cost data collection protocol and accompanying manual for gathering expenditure data in each of the SPNS PWP sites. We worked in close consultation with staff in each of the 15 sites to complete this protocol for each of the 3 years of the demonstration period. Expenditures were classified in one of four categories: 1) personnel, including fringe benefits; 2) recurring supplies and services; 3) capital and equipment; and 4) facility space. In all cases, we identified the costs associated with the SPNS PWP activities, and only these costs were included in the analysis. Capital expenditures such as computers and furniture were amortized over a 5-year expected useful life, assuming no salvage value. Building space was valued at the market rental rate for any space that had previously been used for another activity. We did not value previously unused space. Once expenditures were assigned to one of the four broad categories, site staff further allocated each expenditure item across four activity areas: 1) service delivery for PWP; 2) staff training directly related to service delivery; 3) activities unrelated to PWP, including research unrelated to direct service activities; and 4) fixed costs consisting of intervention overhead and administration. A small portion of general administrative support could not readily be divided between PWP and non-PWP activities; it was allocated to PWP according to the proportion that PWP personnel costs constituted of total program personnel costs. The allocation was performed based on program expenditure records and on senior staff members' knowledge of intervention operations. Preliminary results were reviewed at the University of California-San Francisco. Through meetings, phone conversations, and e-mail correspondence with the University of California-San Francisco evaluation team, the expenditure amounts and allocations were confirmed or revised.
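The proportional rule for splitting shared administrative support can be sketched as follows; this is an illustrative sketch only, and the dollar figures are hypothetical rather than drawn from the sites' expenditure records.

```python
# Shared administrative costs are split in proportion to PWP's share of
# total program personnel costs; all figures below are hypothetical.
def allocate_shared_admin(shared_admin, pwp_personnel, total_personnel):
    """Return the portion of shared administrative cost assigned to PWP."""
    return shared_admin * (pwp_personnel / total_personnel)

# If PWP personnel are $60,000 of $150,000 total personnel costs (40%),
# PWP absorbs 40% of a $10,000 shared administrative cost:
print(allocate_shared_admin(10_000, 60_000, 150_000))  # prints 4000.0
```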
Unit costs for each site were obtained by dividing all costs over the 3 years by the number of clients served. Unit costs for each of the intervention types were obtained by dividing all costs of the programs of that type by all clients served by the sites of that type. Finally, the average cost per dose-minute of service was obtained by dividing the costs for Years 2 and 3 by the number of minutes of direct client contact provided during those 2 years. Dose-minute data were not available during the first year of the study.
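These two unit-cost calculations can be expressed directly; the figures in the example are hypothetical, not site data.

```python
# Unit-cost calculations as described in the text; example figures are hypothetical.

def cost_per_client(total_cost_3yr, clients_served):
    """Cost per client: all costs over the 3 years divided by clients served."""
    return total_cost_3yr / clients_served

def cost_per_dose_minute(cost_yr2, cost_yr3, dose_minutes_yr2_yr3):
    """Cost per dose-minute uses Years 2 and 3 only (Year 1 dose data unavailable)."""
    return (cost_yr2 + cost_yr3) / dose_minutes_yr2_yr3

# Hypothetical site: $300,000 over 3 years and 150 clients served;
# $200,000 in Years 2-3 over 20,000 minutes of direct contact.
print(cost_per_client(300_000, 150))                 # prints 2000.0
print(cost_per_dose_minute(95_000, 105_000, 20_000)) # prints 10.0
```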
We estimated the number of transmissions averted among participants in each intervention type using previously described methods.9 First, we estimated the probability of transmission of HIV for each participant at baseline and at 12 months of intervention follow-up using self-reported sexual risk behavior, self-reported use of antiretroviral therapy, and published transmission probabilities. For each study participant, (i) at each time point (j), we modeled the estimated number of HIV transmissions, Tij, to HIV-uninfected male, HIV-uninfected female, unknown status male, or unknown status female sex partners (k) using the following equation:

T_{ij} = \sum_{k} n \left[ 1 - (1 - \alpha_a)^{a} (1 - \alpha_b)^{b} (1 - \alpha_c)^{c} (1 - \alpha_d)^{d} \right]
In this equation, n denotes the number of sexual partners of a particular gender and HIV status; a, b, c, and d denote the numbers of unprotected sex acts (insertive and receptive anal and vaginal intercourse for heterosexual male participants and receptive anal and vaginal intercourse for female participants) with individuals of that gender and HIV status; and αa, αb, αc, and αd represent the associated per-act transmission probabilities (0.0006 for unprotected receptive anal and vaginal intercourse, 0.001 for unprotected insertive vaginal intercourse, and 0.02 for unprotected insertive anal intercourse). Second, we averaged the probability of transmission of HIV at baseline and 12 months among intervention recipients and among control subjects at each site. Third, we summed the probability of HIV transmission at baseline and 12 months over the number of participants and control subjects in each intervention type. Fourth, for both participants and control subjects, we assessed the number of HIV transmissions averted at each site. This was done by subtracting the estimated number of transmissions at baseline from the estimated number of transmissions at 12 months. We then subtracted the difference found in the control group from the difference found in the participants to arrive at the number of cases averted at each site.
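A minimal sketch of this Bernoulli-process transmission model follows. The per-act probabilities are those given in the text; the data structure (one dictionary per partner stratum) and the example counts are assumptions made for illustration.

```python
# Sketch of the per-participant transmission model: expected transmissions are
# summed over partner strata (gender x HIV status), where the probability of at
# least one transmission per partner is 1 minus the product of per-act escape
# probabilities. Per-act values are from the text; act labels are illustrative.
ALPHA = {
    "receptive_anal_vaginal": 0.0006,  # partner is the insertive one
    "insertive_vaginal": 0.001,
    "insertive_anal": 0.02,
}

def expected_transmissions(partner_strata):
    """partner_strata: list of dicts, each with 'n' partners of a given gender
    and HIV status and 'acts', a mapping of act type to unprotected-act count."""
    total = 0.0
    for stratum in partner_strata:
        prob_no_transmission = 1.0
        for act, count in stratum["acts"].items():
            prob_no_transmission *= (1.0 - ALPHA[act]) ** count
        total += stratum["n"] * (1.0 - prob_no_transmission)
    return total

# Hypothetical participant: 2 HIV-negative partners, 10 unprotected
# insertive anal acts each -> 2 * (1 - 0.98**10) ≈ 0.366 expected transmissions.
print(expected_transmissions([{"n": 2, "acts": {"insertive_anal": 10}}]))
```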
We compared the cost-effectiveness of different intervention types in stages. First, we compared the cost-effectiveness of the least costly intervention type with the cost-effectiveness of standard of care. We then compared the cost-effectiveness of this least costly intervention with the more costly interventions to assess whether there is an incremental increase in effectiveness associated with an incremental increase in cost. Interventions that reported increased risky behavior were omitted from this site-level analysis because cost-effectiveness ratios cannot be assessed when “negative benefits” are present.
We conducted multivariate sensitivity analyses by varying the base case values, as reflected in Table 2, for program cost, effectiveness, and transmission rates. We documented how variations in input values affect cost-effectiveness. Using the lifetime cost of HIV treatment as a threshold, we identified the range of values of key inputs that are consistent with a finding of favorable cost-effectiveness.
In this section, we present results for program costs, unit cost, and cost-effectiveness followed by sensitivity analyses that document the robustness of the cost and cost-effectiveness results to variations in the values assigned to important model inputs.
Total Costs and Cost Structure by Intervention Type
Average expenditures over the full 3 years of the PWP interventions were $146,075 for the two clinical provider sites, $337,881 for the six prevention specialist sites, and $268,911 for the five mixed intervention sites (Table 2). Total costs for all 13 sites over the 3 years were $3,663,995. Direct service provision constituted 29.8% of the total cost for clinical provider interventions, 63.4% for the specialist interventions, and 63.8% for the mixed interventions. Training was the largest cost component of the clinical provider sites, at 45.7% of total costs, in contrast to only 17.3% and 11.6% of the totals for specialist and mixed sites, respectively. Table 3 displays the per-client costs both by major expenditure category (personnel, equipment, supplies, and rent) and by activity (service delivery, training, and overhead). Personnel costs for service delivery were $237 per client for provider sites, or 24% of all costs. For specialist sites, per-client service delivery personnel costs were $1738, or 55% of the total. The equivalent figures for the mixed sites were $1838 and 54%.
Unit Costs and Economies of Scale
The average cost per client served was lowest at the clinical provider sites, $1004, compared with $3173 and $3430 at the specialist and mixed sites, respectively (Table 2). We found that cost per client declined with service volume. The highest cost per client, $11,185, was found at a specialist site that served 37 clients (Tucson). The clinical provider site in Baltimore had the largest caseload, 207 clients, and the lowest cost per client, $660. With each additional client, per-client expenditures decline by $37.60 (R2 for bivariate linear regression, 0.42). We also found evidence of economies of scale when the unit of output was defined as a dose-minute of direct client contact with a provider. Figure 1 shows the distribution of the cost per dose-minute by the average monthly dose-minutes delivered. All three service types exhibit declining costs with scale. Cost per dose-minute at the two clinical provider sites declines by over 50% as average monthly dose-minutes increase from 145 (Baltimore) to 431 (Birmingham). Dose-minute costs at the specialist sites decline from $27.05 at an average of 324 dose-minutes per month (DeKalb County, GA) to $3.57 per dose-minute at a volume of 1506 dose-minutes per month (Philadelphia).
Effectiveness and Cost-Effectiveness
The estimated reduction in transmission of HIV for each intervention type is presented in Table 2. Over the 3-year evaluation period, the two clinical provider interventions prevented 2.71 cases, the six specialist-led interventions prevented 1.11 cases, and the five mixed interventions prevented 3.02 cases of HIV. For direct comparison across the three intervention types, we adopted a standardized measure of effectiveness, namely HIV cases averted per 100 clients served in each program type: 0.47 cases for the clinical provider programs, 0.03 for the specialist programs, and 0.15 for the mixed programs. Pooling costs and cases of HIV averted across all sites, we observed cost-effectiveness for the 13 SPNS sites of $535,782 per case of HIV averted compared with the standard of care. The clinical provider-delivered interventions were the most cost-effective, at $107,656 per case averted when compared with the alternative of no intervention ($146,075/1.36 cases averted). The clinical provider sites also dominated (both more effective and less costly) the other two PWP intervention modalities in the incremental cost-effectiveness analysis. For this reason, no incremental cost-effectiveness ratio is given (Table 2).
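The pooled cost-effectiveness ratio can be reproduced from the totals reported above; the small difference from the published $535,782 presumably reflects rounding in the reported cases-averted figures.

```python
# Cost-effectiveness ratio: program cost divided by estimated HIV cases averted.
def ce_ratio(cost, cases_averted):
    return cost / cases_averted

pooled_cost = 3_663_995            # total for all 13 sites over 3 years
pooled_cases = 2.71 + 1.11 + 3.02  # provider + specialist + mixed cases averted
print(round(ce_ratio(pooled_cost, pooled_cases)))  # 535672, vs $535,782 reported
```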
We used a threshold of $303,100 per case averted as a cost-effectiveness threshold. Discounted to the time of infection, this is the lifetime cost of treating HIV in the United States according to a 2006 multisite study.10 Thus, interventions costing less than this are not only cost-effective but also cost-saving assuming that people whose cases were prevented would otherwise have access to treatment.
Monte Carlo Simulation
A Monte Carlo simulation (Crystal Ball, Version 7.2, Oracle Corporation, Redwood Shores, CA) assessed the aggregate uncertainty from the three key inputs of this analysis: program costs, effectiveness, and the risk of HIV transmission as calculated from the reported behavioral change data. In the absence of information about the underlying distribution of these input values, beta distributions with maximum and minimum values set to 50% and 150% of the base case value were fit around each variable of interest. The alpha and beta parameters were set to 5, ensuring a symmetric distribution approximating the normal with the base case as the mean value.11 We also ran the simulation using uniform distributions, thus eliminating any assumptions about the central tendencies of the underlying distributions.
With 50,000 trials, the cost-effectiveness of the clinical provider sites at the 80% confidence level varied from $79,852 to $147,482 using beta distributions for the three variables and from $58,507 to $206,809 using uniform distributions (see Table 4). In both cases, the high end of the range is well under the threshold of $303,100 (ie, favorable cost-effectiveness) at the 80% confidence level. Considering the average cost-effectiveness of all sites, cost-effectiveness ranges from $398,355 to $746,185 and from $291,925 to $1,053,255 for beta and uniform distributions, respectively (confidence interval, 80%). The low end of the range using uniform distributions is thus just on the favorable side of the $303,100 threshold.
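The same simulation design can be approximated without Crystal Ball; the sketch below uses numpy, takes the provider-site base case from the text, and, as a simplifying assumption, collapses effectiveness and transmission risk into a single cases-averted input rather than varying them separately.

```python
# Monte Carlo sketch: Beta(5, 5) distributions rescaled to [50%, 150%] of the
# base case, 50,000 trials, 80% confidence interval on the CE ratio.
import numpy as np

rng = np.random.default_rng(0)
N = 50_000

def scaled_beta(base, size=N):
    """Beta(5, 5) draws rescaled to [0.5, 1.5] x base: symmetric, mean = base."""
    return base * (0.5 + rng.beta(5, 5, size))

cost = scaled_beta(292_150)   # total 3-year cost of the two provider sites
cases = scaled_beta(2.71)     # cases averted (effectiveness and transmission
                              # risk collapsed into one input for simplicity)
ce = cost / cases
lo, hi = np.percentile(ce, [10, 90])  # 80% confidence interval
print(round(lo), round(hi))
```

Because the ratio of two symmetric inputs is right-skewed, the interval is not symmetric around the base-case ratio, mirroring the asymmetry of the published ranges.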
Multivariable Threshold Analysis
Figure 2 displays the combinations of program costs and transmission risks that are consistent with the threshold level of cost-effectiveness if program effectiveness for clinical provider sites were 50%, 100%, and 200% of the actual effectiveness estimates derived from our study. The areas above each of the cost-effectiveness frontier lines represent the combinations of transmission levels and costs that are consistent with cost-effectiveness. The blue diamond indicates the base case for this analysis. Thus, even if program effectiveness were only 50% of that found in our study, costs could also rise by almost 50% before the clinical provider interventions ceased to be cost-effective compared with no intervention.
Scenario Analysis Using Only Sites Showing Benefit
Four sites exhibited increased risky behavior and thus had “negative benefits.” Two of these were specialist and two were mixed sites. If these sites are disregarded, the incremental cost-effectiveness ratio of mixed versus provider is $979,936 per case averted. In this scenario, both provider and mixed dominate specialist.
In this study of behavior change interventions with HIV-infected patients, clinical sites offering clinical provider-delivered interventions were more cost-effective in reducing sexual HIV transmission risk than sites featuring HIV prevention services delivered by prevention specialists or interventions that included a mix of providers and specialists. Based on self-reported sexual behavior, the two provider-delivered intervention sites prevented an average of 1.36 HIV transmissions at an average annual cost of $146,075, or $107,656 per HIV case averted. The unfavorable cost-effectiveness of the specialist and mixed sites is attributable primarily to the relatively low average effectiveness of the services at these sites, which included four that reported an increase in risky behavior of service recipients relative to controls receiving standard of care.
The four sites that showed an increase in risky behavior represent those with the most disenfranchised patient populations (ie, highest proportion of patients from minority populations). It is possible that patients within these sites were less likely to disclose transmission risk behavior at baseline and more likely to disclose risk behavior after participating in the intervention. This increased comfort among those who participated in the intervention may have led to bias in the observed results. Although we cannot be sure of the cause of these unexpected results, they do not substantively affect our findings. As shown in the scenario analysis, provider sites maintain a large cost-effectiveness advantage over the other two modalities even if the two specialist and two mixed sites with negative benefits are omitted from the calculation.
The cost per client served averaged $1004, $3201, and $3468 for clinical provider, specialist, and mixed provider-type sites, respectively. Clinical provider sites were thus the least costly per client served and cost the least overall. Provider sites also devoted the smallest portion of their expenditures to direct service provision.
There are other threshold values for cost-effectiveness that might be used instead of the cost of HIV treatment. A 2004 review of the cost-effectiveness of a large variety of HIV prevention intervention types found a wide range of program costs per case of HIV averted.12 Within counseling interventions, discordant couples counseling has a cost-effectiveness of $16,000 per case averted, whereas standard counseling and testing with both HIV-negative and HIV-positive clients costs $210,000 per case averted. Other standard interventions evaluated in this review included condom availability at $22,000 per case averted and community mobilization of high-risk populations (Mpowerment) with a cost of $12,000 per case averted. Group counseling (multiple sessions) cost $170,000 per case averted in one study and $320,000 in another. At $107,656 per case averted, the SPNS clinical provider intervention sites are well within this range and should thus be considered cost-effective in a comparison with other accepted prevention options.
Over the range of caseloads we observed, larger scale was associated with lower unit costs as measured both by the cost per client served and the cost per dose-minute of provider-client interaction. Our study could not, however, offer evidence regarding the relationship between scale and cost-effectiveness. Only nine of the 13 sites could be assessed individually for cost-effectiveness. In addition, the Tucson site, which had a highly unfavorable cost-effectiveness ratio as a result of the small observed decrease in transmission risk, was solely responsible for the observed scale effect. A more definitive specification of the relationship between scale and cost-effectiveness awaits a study with a sufficient number of sites to support a multivariate analysis of the contribution of scale to efficiency.
This study had a number of limitations. Because the site was the unit of analysis, our sample was too small to yield definitive results. Of the original 15 sites, 13 yielded reliable cost and outcome data, and of these 13, only two were clinical provider sites. It was therefore impossible to conduct statistical tests of the difference in cost-effectiveness among the three types of PWP interventions. A second limitation is that assessing the cost of PWP activities required the allocation of expenditures across the categories of direct services, training, research, and administration/overhead. These allocations were not based on standard or pre-existing accounting templates and therefore required personal judgment by staff members. Although we reviewed and discussed the allocations and their rationales carefully, this method is imperfect. However, most of the potential misallocations do not affect our primary results. For example, if some of the resources classified as training ought to have been considered direct service, this affects the distribution of the amounts shown in Tables 2 and 3 but not the total cost, and only the total is used to calculate the cost-effectiveness ratios. Misallocations between PWP and non-PWP activities at the same sites would affect the accuracy of our results, but these are unlikely to have occurred to a significant degree because accounting reports were required by the Health Resources and Services Administration to track PWP expenditures and because intervention staff can readily distinguish between these two types of activities. Third, estimates of intervention effect are based on self-reported changes in behavior. Although these methods are standard in low-prevalence settings, they carry a potential for social desirability bias, which may inflate the estimates of intervention benefit.13 However, we conservatively assumed no intervention benefits extending beyond the 3 years of the intervention's life, and this is a potential countervailing bias.
Fourth, we assume that all averted infections are truly averted, not merely postponed. Estimating the portion of cases that are postponed is rarely done in the assessment of HIV interventions, and obtaining a precise estimate requires a number of assumptions about the evolution of partners' risk profiles. However, a rough assessment based on a baseline HIV incidence of 0.0096 per year among the partners of the HIV-infected clients suggests that the recurrence of nominally averted infections would be modest over 5 years: 4.7%, ie, 1 - (1 - 0.0096)^5, or 4.1% assuming uniform risk over the 5 years and discounting at 5% per annum. Finally, estimates of risk reduction are limited to patients who participated in the interventions and do not capture effects on the community. The clinical provider-delivered interventions were successful in reaching a large number of patients, so the likelihood of community-level effects is significant. If so, our reported results underestimate the effectiveness and cost-effectiveness of the clinical provider sites.
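The back-of-envelope recurrence figure can be checked in one line; the 4.1% discounted variant rests on additional assumptions and is omitted here.

```python
# Share of nominally averted infections expected to recur within 5 years,
# given an annual HIV incidence of 0.0096 among partners (from the text).
incidence = 0.0096
recurrence_5yr = 1 - (1 - incidence) ** 5
print(round(recurrence_5yr * 100, 1))  # prints 4.7
```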
The clinical provider sites we studied are both less expensive and more effective than either of the two alternative modalities. Most of their cost advantage derives from lower personnel outlays for direct service provision. Provider sites spent an average of $237 per client on personnel for direct services, much lower than the $1738 and $1838 for the specialist and mixed sites, respectively (Table 3). Although medically trained providers are paid more than counselors and other HIV prevention specialists, the clinical provider sites use computers to help assess clients' needs, which requires none of the providers' costly time. More importantly, the staff at clinical provider sites spend less time with clients, 4 minutes on average each month, compared with 78 minutes and 40 minutes per client per month for the specialist and mixed sites, respectively. Thus, although dose-minutes at provider sites are more expensive, $17.46 compared with $7.37 and $14.42 for specialist and mixed sites, respectively, the overall cost per client served is lower at the clinical provider sites.
Among the sites we studied, it appears that the clinical provider sites do not sacrifice effectiveness in exchange for the shorter duration of time spent in direct client contact. On the contrary, the clinical provider sites averted 0.47 cases of HIV per 100 clients served versus 0.03 and 0.15 for specialist and mixed sites (Table 2). It thus appears that a unit of a provider's time is associated with a greater reduction in risky behavior than a unit of direct services in the other two types of settings. Based on our findings, the provider approach appears to be the most cost-effective of the three intervention types we assessed. It is also cost-effective when compared with the cost of HIV treatment and with the cost-effectiveness of other widely adopted HIV prevention interventions. Based on these results, the provider model would appear to be a good candidate for additional research and intervention support. Although we make limited claims on what can be concluded from one study that included only two clinical provider sites, the finding reported here is potentially important. If repeated elsewhere, program managers might consider emphasizing clinical provider-based services over nonclinical provider-based prevention for HIV-positive clients.
We appreciate the assistance of Dr. James G. Kahn, University of California-San Francisco, for his helpful consultation on epidemiologic questions that arose in the course of this analysis.