Effective monitoring of treatment response is required to maintain the life-saving benefits of antiretroviral therapy (ART). These benefits are currently threatened by a range of issues associated with imperfect adherence, virologic failure, and acquired drug resistance.1–3 Failure on first-line ART results in patients being switched to a more expensive and more toxic second-line regimen, which increases the probability of subsequent virologic failure and limits future treatment options.4 Public health care strategies that correctly diagnose treatment failure will therefore play an important role in maintaining the success of HIV programs in resource-limited settings.
The standard strategies to monitor patient response to ART include 2 laboratory tests: CD4+ cell (CD4) count and HIV-1 RNA viral load (VL) count. The World Health Organization (WHO) guidelines recommend VL testing to provide a more accurate indication of treatment failure.5 However, some HIV programs in resource-limited settings still exclusively implement immunologic (CD4) monitoring on the basis of its perceived affordability.6 This decision will have important implications for the monitoring of patients on ART because a number of clinical factors can affect the ability of CD4 tests to correctly diagnose treatment failure.7–16
We hypothesize that immunologic monitoring will be a less affordable strategy than its virologic counterpart, because its inferior diagnostic performance will result in more patients being incorrectly switched to expensive second-line regimens. In most resource-limited settings, the absence of drug resistance testing makes it difficult to determine whether a correct regimen switch has been made in the presence of treatment failure. We evaluated our hypothesis using data from a large cohort of South African patients (N = 4177), a subset of whom were sent for drug resistance testing (n = 480). Using a sensitivity analysis, we then evaluated the diagnostic performance of immunologic and virologic monitoring in identifying treatment failure with the need for a second-line regimen switch. Finally, we used the results of the sensitivity analysis to calculate the US dollar cost of making 1 correct regimen switch under each monitoring strategy.
Study Setting and Design
We used data from a longitudinal cohort study enrolling patients from the Hlabisa HIV Treatment and Care Programme between January 2006 and March 2014. The program is implemented in 17 primary health care clinics and 1 district hospital in the northern KwaZulu-Natal province of South Africa. It offers dual CD4/VL monitoring and distributes ART free of charge to HIV-infected patients in accordance with WHO treatment guidelines.17 Our study included 4177 adult patients (≥18 years) who were on a first-line ART regimen for at least 6 months with 2 or more CD4/VL count measurements. CD4 tests were scheduled every 6 months. VL tests were scheduled at months 6 and 12 and then every 12 months if VL <400 copies per milliliter or repeated after 3 months if VL >1000 copies per milliliter.18 Before 2010, patients were initiated on first-line ART regimens consisting of stavudine, lamivudine, and either efavirenz or nevirapine. In 2010, tenofovir replaced stavudine. A drug resistance cohort study is nested within the Hlabisa program. The Hlabisa program, drug resistance cohort, and demographic characteristics of the study setting are presented in greater detail elsewhere.19–21
Our aim was to evaluate the accuracy of patient CD4+ count (cells/μL) and VL (log10 copies/mL) in diagnosing treatment failure with the need for a second-line regimen switch. We first selected test measurements between the patient's most recent pre-ART date (baseline) and last clinic visit date (right censorship). Using this information, we next obtained a CD4 count slope and a VL count slope for each patient to assess their immunologic and virologic response to treatment over time. We then used the predicted values from each patient's CD4 slope to compute their relative percentage change in absolute CD4 count over the last 6 months, abbreviated as %ΔCD4. Similarly, we used the predicted values from each patient's VL slope to calculate their absolute change in log10VL count over the last 6 months, abbreviated as ΔVL (see Supplemental Digital Content, Section 1, http://links.lww.com/QAI/A762).
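For illustration, the slope-based change measures can be sketched as follows. This is not the study's Stata code, and the simple ordinary least-squares fit is our assumption about how the per-patient slopes were obtained:

```python
def linear_fit(months, values):
    """Ordinary least-squares slope and intercept over a patient's visits."""
    n = len(months)
    mx, my = sum(months) / n, sum(values) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(months, values))
             / sum((x - mx) ** 2 for x in months))
    return slope, my - slope * mx

def percent_delta_cd4(months, cd4_counts, window=6.0):
    """Relative % change in the predicted CD4 count over the last
    `window` months of follow-up (the %-change measure in the text)."""
    slope, intercept = linear_fit(months, cd4_counts)
    t_last = max(months)
    now = slope * t_last + intercept
    before = slope * (t_last - window) + intercept
    return 100.0 * (now - before) / before

def delta_log_vl(months, log10_vl, window=6.0):
    """Absolute change in predicted log10 VL over the last `window`
    months: slope (log10 copies/mL per month) times the window."""
    slope, _ = linear_fit(months, log10_vl)
    return slope * window

# A patient whose CD4 count fell from 360 to 180 cells/uL over 6 months
print(percent_delta_cd4([0, 6], [360, 180]))  # -50.0
```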
We used the %ΔCD4 and ΔVL values to create a qualitative measure of a high, medium, low, or very low need for a second-line regimen switch. Specifically, patients with a %ΔCD4 <0% were described as having a high need for a regimen switch, %ΔCD4 0.1%–5.0% as having a medium need, %ΔCD4 5.1%–20.0% as having a low need, and %ΔCD4 >20.0% as having a very low need. For example, a patient with a CD4+ count of 180 cells per microliter at their most recent clinic visit, and a CD4+ count of 360 cells per microliter 6 months before, would be diagnosed as having a high need given a %ΔCD4 of −50%. Similarly, we used ΔVL cut-points of >0.3, 0.01–0.3, −0.3 to 0.0, and <−0.3 log10 copies per milliliter, respectively, to classify a high, medium, low, and very low need for a second-line regimen switch (see Supplemental Digital Content, Section 2, http://links.lww.com/QAI/A762).
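The cut-points above translate directly into a classification rule. A minimal sketch, treating the stated bands as contiguous intervals:

```python
def cd4_need(pct_delta_cd4):
    """Switch need from the relative % change in CD4 over the last 6 months."""
    if pct_delta_cd4 < 0.0:
        return "high"
    if pct_delta_cd4 <= 5.0:
        return "medium"
    if pct_delta_cd4 <= 20.0:
        return "low"
    return "very low"

def vl_need(delta_vl):
    """Switch need from the change in log10 VL over the last 6 months."""
    if delta_vl > 0.3:
        return "high"
    if delta_vl > 0.0:
        return "medium"
    if delta_vl >= -0.3:
        return "low"
    return "very low"

# The worked example from the text: a fall from 360 to 180 cells/uL (-50%)
print(cd4_need(-50.0))  # high
```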
We wanted to evaluate how accurately this qualitative measure of need (high, medium, low, or very low) could diagnose the true need for a second-line regimen switch. To determine true need, we sent all patients whose 2 latest VL measurements were >1000 copies per milliliter for a genotypic resistance test. We then used the Rega algorithm to obtain a genotypic susceptibility score (GSS) for each antiretroviral agent in the first-line regimen, with a total GSS <2 indicating drug resistance (see Supplemental Digital Content, Section 3, http://links.lww.com/QAI/A762). We defined the outcome of this study as a drug resistance result with the true need for a second-line regimen switch (or drug susceptibility on the first-line regimen otherwise).
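As an illustration of the GSS rule: per-drug scores are commonly assigned as 1 (susceptible), 0.5 (intermediate), or 0 (resistant) and summed over the regimen. The exact Rega scoring used in the study is described in the supplement, so the scores below are hypothetical:

```python
def total_gss(per_drug_scores):
    """Sum per-drug genotypic susceptibility scores over the first-line
    regimen; a total GSS < 2 is classified as drug resistance."""
    return sum(per_drug_scores.values())

# Hypothetical genotype interpretation against a first-line regimen
regimen_scores = {"tenofovir": 1.0, "lamivudine": 0.0, "efavirenz": 0.0}
print(total_gss(regimen_scores) < 2)  # True -> classified as drug resistant
```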
We next asked how many drug resistance cases would be correctly identified if all high-need patients (threshold I), all high- and medium-need patients (threshold II), or all high-, medium-, and low-need patients (threshold III) received a diagnosis for a regimen switch. For each threshold, we used a receiver operating characteristic analysis to calculate the sensitivity, specificity, false-positive rate (1 − specificity), and positive predictive value. We also calculated the number of patients that would need to be tested (NNT) to make 1 correct regimen switch. We further used survival analysis methods to model the time to treatment failure with drug resistance conditional on a high, medium, low, or very low need for a second-line regimen switch (see Supplemental Digital Content, Section 4, http://links.lww.com/QAI/A762).
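The per-threshold accuracy measures reduce to standard 2 × 2 table arithmetic. A minimal sketch (the counts in the usage line are hypothetical, not from the study):

```python
def diagnostic_accuracy(tp, fp, fn, tn):
    """Sensitivity, specificity, false-positive rate, positive predictive
    value, and NNT (patients switched per 1 correct switch, i.e., 1/PPV)
    from true/false positive and negative counts at a given threshold."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    return {"sensitivity": sensitivity,
            "specificity": specificity,
            "false_positive_rate": 1.0 - specificity,
            "ppv": ppv,
            "nnt": 1.0 / ppv}

# Hypothetical: 100 patients diagnosed for a switch, of whom 80 truly
# have drug resistance; 400 below the threshold, of whom 380 are susceptible
print(diagnostic_accuracy(tp=80, fp=20, fn=20, tn=380))
```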
We then used the results from the sensitivity analysis to derive the dollar cost for each threshold and monitoring strategy. We first calculated a baseline cost to make 1 correct regimen switch, obtained by multiplying the NNT by the price of a CD4 (US $9.18) or VL (US $45.88) test.22 We also calculated the cost of incorrectly switching patients from a first-line regimen (US $146.50/year) to a second-line regimen (US $465.50/year) for the duration of 1 year.23 These 2 amounts were added to give the US dollar cost to make 1 correct switch to a second-line regimen. The full costing model is described in Section 5 of the Supplemental Digital Content (http://links.lww.com/QAI/A762). Stata version 12.1 was used for the analysis.
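A simplified sketch of this costing logic follows. The baseline component (NNT × test price) is stated in the text; how the incorrect-switch cost enters is specified only in the supplement, so charging 1 year of the incremental second-line drug cost for each incorrectly switched patient per correct switch is our assumption, not the study's exact model:

```python
CD4_TEST_PRICE = 9.18   # US$ per CD4 test (ref. 22)
VL_TEST_PRICE = 45.88   # US$ per VL test (ref. 22)
FIRST_LINE = 146.50     # US$ per patient-year on first-line ART (ref. 23)
SECOND_LINE = 465.50    # US$ per patient-year on second-line ART (ref. 23)

def cost_per_correct_switch(nnt, test_price, incorrect_per_correct):
    """Baseline testing cost (NNT x test price) plus 1 year of the
    incremental second-line drug cost for each patient incorrectly
    switched per correct switch (assumed combination; the full costing
    model is in the Supplemental Digital Content, Section 5)."""
    baseline = nnt * test_price
    wasted_drug_cost = incorrect_per_correct * (SECOND_LINE - FIRST_LINE)
    return baseline + wasted_drug_cost
```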
The study was approved by the Biomedical Research Ethics Committee of the University of KwaZulu-Natal and the Health Research Committee of the KwaZulu-Natal Department of Health. Written informed consent was obtained from all the study participants.
Our final analytic sample consisted of 4177 patients (≥18 years) with a mean (SD) age of 41 (±10.4) years. Of these patients, 25.8% (n = 1078) were men. The mean (SD) duration of ART exposure was 51.5 (±23.4) months, and the mean (SD) time between clinic visit dates was 7.87 (±2.90) months. The median for the patient-specific CD4 slopes was 7.1 (interquartile range, 3.8–11.9) cells per microliter change per month, and the median for the VL slopes was −0.04 (interquartile range, −0.07 to 0.00) log10 copies per milliliter change per month for the whole cohort (see Figure S2, Supplemental Digital Content, http://links.lww.com/QAI/A762).
Of the 4177 patients, 480 (11%) were identified to have virologic failure and sent for a genotype test. Of these, 396 (83%) patients had drug resistance with a GSS <2, and 84 (17%) patients had treatment failure without drug resistance. Virologic suppression was determined by the 2 most recent VL measurements <400 copies per milliliter (n = 3308) or an undetectable VL <40 copies per milliliter at the most recent clinic visit date (n = 389). Patients whose virologic failure status could not be definitively determined (n = 641) were not included in the final sample (see Figure S1, Supplemental Digital Content, http://links.lww.com/QAI/A762). Patients with drug resistance had a mean (SD) of 6.3 (±2.9) CD4 and 5.9 (±2.9) VL measurements, compared with 6.2 (±2.9) CD4 and 5.4 (±3.2) VL measurements for patients without. There was no difference in the duration of time on ART for patients with and without drug resistance. In Figure 1, we show the time to drug resistance for patients diagnosed with a high, medium, low, or very low need for a regimen switch.
We show the diagnostic accuracy and cost-effectiveness for each threshold and monitoring strategy in Table 1. Under virologic monitoring, for example, 295 of the 396 patients with drug resistance had a high or medium need for a regimen switch (ΔVL >0.0 log10 copies/mL) and were correctly identified (giving a sensitivity of 74.5%), and 3568 of the 3781 drug-susceptible patients fell below this threshold and were correctly identified (giving a specificity of 94.4%; see Table S2, Supplemental Digital Content, http://links.lww.com/QAI/A762). If all high- and medium-need patients were (correctly or incorrectly) diagnosed for a regimen switch, then 295 of these 508 patients would be correctly identified to have drug resistance, giving a positive predictive value of 58.1% and an NNT of 1.7.
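These reported figures can be checked directly from the counts in the paragraph above:

```python
tp = 295            # resistant patients with a high or medium need
fn = 101            # 396 resistant in total, so 101 fell below the threshold
tn = 3568           # susceptible patients correctly below the threshold
fp = 213            # 3781 susceptible in total, so 213 fell above it
positives = tp + fp # 508 patients diagnosed for a regimen switch

print(round(100 * tp / (tp + fn), 1))  # 74.5 -> sensitivity (%)
print(round(100 * tn / (tn + fp), 1))  # 94.4 -> specificity (%)
print(round(100 * tp / positives, 1))  # 58.1 -> positive predictive value (%)
print(round(positives / tp, 1))        # 1.7  -> NNT
```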
Table 1 shows that it is more affordable to make 1 correct regimen switch in high-need patients (threshold I) under immunologic monitoring (CD4 = US $77.4; VL = US $146.2); however, approximately two thirds of all patients who truly need a regimen switch would be missed under both strategies (CD4 = 65.7%; VL = 67.9%). The percentage of missed regimen switches would be considerably reduced, to 25.5% or less, if all high- and medium-need patients were switched under threshold II (CD4 = 20.7%; VL = 25.5%). At this threshold, however, a much higher percentage of patients would be incorrectly switched to a second-line regimen under immunologic monitoring (CD4 = 29.0%; VL = 5.6%). Virologic monitoring would then become substantially more affordable for making 1 correct regimen switch (CD4 = US $498.9; VL = US $186.4). We show a similar cost saving for virologic monitoring in high-, medium-, and low-need patients (CD4 = US $3031.0; VL = US $1828.8). The reduction in cost is again primarily because of the superior specificity of virologic monitoring under threshold III (CD4 = 8.8%; VL = 43.2%), which reduces unnecessary second-line switching.
In this study, we show that the superior accuracy of virologic monitoring reduces the number of patients incorrectly switched to more expensive second-line regimens. As a result, this strategy is substantially more affordable than immunologic monitoring. For example, CD4 testing would cost more than double to make 1 correct regimen switch for patients diagnosed with a high or medium need for a regimen switch (CD4 = US $499; VL = US $186). Similarly, CD4 testing would cost over one and a half times as much to make 1 correct regimen switch for patients diagnosed with a high, medium, or low need (CD4 = US $3031; VL = US $1829). Our findings challenge the perception that exclusive CD4 testing can reduce the costs of treatment monitoring in resource-limited settings.
Recent WHO guidelines recommend the use of virologic monitoring to provide an early and more accurate indication of treatment failure.5 Our conclusions align with these guidelines. Furthermore, we confirm the results of a previous simulation study, which predicted significant cost savings because of the superior accuracy of VL testing.24 Our study benefits from the use of real-world data collected from a large cohort of patients undergoing HIV drug resistance testing. We contribute to the literature by showing that the affordability of a treatment monitoring strategy is a function of its diagnostic accuracy. This finding is relevant for clinical policy makers because it suggests that virologic monitoring should be preferred over immunologic monitoring once a patient is initiated on ART.
1. Bärnighausen T, Tanser F, Herbst K, et al. Structural barriers to antiretroviral treatment: a study using population-based CD4 cell count and linked antiretroviral treatment programme data. Lancet. 2013;382(suppl 2):S5.
2. Ford N, Darder M, Spelman T, et al. Early adherence to antiretroviral medication as a predictor of long-term HIV virological suppression: five-year follow up of an observational cohort. PLoS One. 2010;5:e10460.
3. Hamers RL, Kityo C, Lange JM, et al. Global threat from drug resistant HIV in sub-Saharan Africa. BMJ. 2012;344:e4159.
4. Bartlett JA, Shao JF. Successes, challenges, and limitations of current antiretroviral therapy in low-income and middle-income countries. Lancet Infect Dis. 2009;9:637–649.
5. WHO. Consolidated Guidelines on the Use of Antiretroviral Drugs for Treating and Preventing HIV Infection: Recommendations for a Public Health Approach. Geneva, Switzerland: World Health Organization; 2013.
6. Roberts T, Bygrave H, Fajardo E, et al. Challenges and opportunities for the implementation of virological testing in resource-limited settings. J Int AIDS Soc. 2012;15:17324.
7. Gupta RK, Hill A, Sawyer AW, et al. Virological monitoring and resistance to first-line highly active antiretroviral therapy in adults infected with HIV-1 treated under WHO guidelines: a systematic review and meta-analysis. Lancet Infect Dis. 2009;9:409–417.
8. Reynolds SJ, Sendagire H, Newell K, et al. Virologic versus immunologic monitoring and the rate of accumulated genotypic resistance to first-line antiretroviral drugs in Uganda. BMC Infect Dis. 2012;12:381.
9. Rawizza HE, Chaplin B, Meloni ST, et al. Immunologic criteria are poor predictors of virologic outcome: implications for HIV treatment monitoring in resource-limited settings. Clin Infect Dis. 2011;53:1283–1290.
10. Reynolds SJ, Nakigozi G, Newell K, et al. Failure of immunologic criteria to appropriately identify antiretroviral treatment failure in Uganda. AIDS. 2009;23:697–700.
11. Castelnuovo B, Kiragga A, Schaefer P, et al. High rate of misclassification of treatment failure based on WHO immunological criteria. AIDS. 2009;23:1295–1296.
12. Sigaloff KCE, Hamers RL, Wallis CL, et al. Unnecessary antiretroviral treatment switches and accumulation of HIV resistance mutations; two arguments for viral load monitoring in Africa. J Acquir Immune Defic Syndr. 2011;58:23–31.
13. Sax PE, Boswell SL, White-Guthro M, et al. Potential clinical implications of interlaboratory variability in CD4+ T-lymphocyte counts of patients infected with human immunodeficiency virus. Clin Infect Dis. 1995;21:1121–1125.
14. Cingolani A, Lepri AC, Castagna A, et al. Impaired CD4 T-cell count response to combined antiretroviral therapy in antiretroviral-naive HIV-infected patients presenting with tuberculosis as AIDS-defining condition. Clin Infect Dis. 2012;54:853–861.
15. Schomaker M, Egger M, Maskew M, et al. Immune recovery after starting ART in HIV-infected patients presenting and not presenting with tuberculosis in South Africa. J Acquir Immune Defic Syndr. 2013;63:142.
16. Ekouevi DK, Inwoley A, Tonwe-Gold B, et al. Variation of CD4 count and percentage during pregnancy and after delivery: implications for HAART initiation in resource-limited settings. AIDS Res Hum Retroviruses. 2007;23:1469–1474.
17. Lessells RJ, Mutemwa R, Iwuji C, et al. Reduction in early mortality on antiretroviral therapy for adults in rural South Africa since change in CD4+ cell count eligibility criteria. J Acquir Immune Defic Syndr. 2013;65:e17–e24.
18. Department of Health. The South African Antiretroviral Treatment Guidelines. Pretoria, South Africa: National Department of Health; 2013.
19. Manasa J, Lessells RJ, Skingsley A, et al. High levels of acquired drug resistance in adult patients failing first-line antiretroviral therapy in a rural HIV treatment programme in KwaZulu-Natal, South Africa. PLoS One. 2013;8:e72152.
20. Houlihan CF, Bland RM, Mutevedzi PC, et al. Cohort profile: Hlabisa HIV treatment and care programme. Int J Epidemiol. 2011;40:318–326.
21. Vandormael A, Newell ML, Bärnighausen T, et al. Use of antiretroviral therapy in households and risk of HIV acquisition in rural KwaZulu-Natal, South Africa, 2004–12: a prospective cohort study. Lancet Glob Health. 2014;2:e209–e215.
22. National Health Laboratory Service. Laboratory Users Handbook. Pretoria, South Africa: Tshwane Academic Division; 2013.
23. Estill J, Egger M, Blaser N, et al. Cost-effectiveness of point-of-care viral load monitoring of ART in resource-limited settings: mathematical modelling study. AIDS. 2013;27:1483–1492.
24. Hamers RL, Sawyer AW, Tuohy M, et al. Cost-effectiveness of laboratory monitoring for management of HIV treatment in sub-Saharan Africa: a model-based analysis. AIDS. 2012;26:1663–1672.