Since 2002, the number of HIV-positive people receiving antiretroviral therapy (ART) in low- and middle-income countries has increased dramatically, from 300,000 in 2002 to 10 million by the end of 2012, representing two thirds of the United Nations target of 15 million people on ART by 2015.1 The massive scale-up of ART also increased the number of patients experiencing treatment failure, the need for more expensive second-line regimens, and levels of viral resistance.2,3
Clinical and laboratory monitoring of patients on ART aims to maximize the durability of first-line regimens. In high-income countries, plasma HIV-1 RNA viral load (VL) and CD4-positive T-cell count (CD4 count) are regularly measured, and resistance tests are performed when drug resistance is suspected.4 In resource-limited settings, however, monitoring of ART is still generally based on CD4 counts and on signs and symptoms. The accuracy of the criteria proposed by the World Health Organization (WHO)5 to detect virologic failure based on CD4 count and clinical criteria is poor: the positive predictive value (PPV) and sensitivity are below 50%.6,7 Patients with suppressed viral replication may thus unnecessarily be switched to second-line ART, and patients who fail therapy will switch late or not at all.8 The 2013 WHO consolidated guidelines on the use of antiretroviral drugs for treating and preventing HIV infection and the March 2014 supplement recommend routine VL monitoring but recognize that scaling up VL testing in resource-limited settings will be challenging.9,10
In settings where VL is not monitored routinely, the first priority should be to confirm virologic failure in patients in whom treatment failure is suspected, based on CD4 count and clinical monitoring.10 Targeted VL testing of selected patients based on CD4 count and other criteria is promising in this situation: only relatively few patients have to be tested, thus reducing costs compared to routine VL monitoring.11 We developed and validated risk charts based on current and past CD4 counts and decision rules to guide targeted VL testing.
The International epidemiologic Databases to Evaluate AIDS in Southern Africa (IeDEA-SA) is a regional collaboration of HIV treatment and care programs, which is part of a consortium of 7 networks in sub-Saharan Africa, the Asia-Pacific, North America, and the Caribbean, Central and South America.12–14 Data are collected at ART initiation and at each follow-up visit, using standardized instruments, and transferred at regular intervals to data centers in Switzerland and South Africa. Ethical approval was obtained from the ethics committee of the Canton of Bern, Switzerland, and the University of Cape Town, South Africa. All participating cohorts obtained local ethical committee approval to contribute data to this analysis.
We developed the risk charts using 7 South African cohorts: the Gugulethu and Khayelitsha township ART programs and Tygerberg hospital in Cape Town,15–17 the McCord Hospital in Durban,18 the Helen Joseph Hospital Themba Lethu Clinic and Aurum Institute for Health Research program in Johannesburg,19,20 and the Hlabisa HIV Treatment and Care program in rural Somkhele, KwaZulu-Natal.21 We describe the 7 cohorts, which mainly include urban and township populations, as the derivation dataset. We validated the risk charts in the South African Kheth'Impilo cohort, which includes health facilities from urban and rural areas in the Eastern Cape, KwaZulu-Natal, and Mpumalanga,22 the Centre for Infectious Diseases and Research in Zambia (CIDRZ), which covers urban and periurban populations in Lusaka,23 and the TREAT Asia HIV Observational Database (TAHOD) in the Asia-Pacific.24
In South Africa, all cohorts monitored VL and CD4 cell counts every 6 months. Similarly, from TAHOD, we included 17 sites in 12 countries that routinely monitored VL and CD4 counts every 3–6 months. In CIDRZ, CD4 cell counts are monitored every 3–6 months, and VL is measured in patients suspected of failing therapy.
Inclusion Criteria, Definitions, and Imputations of Missing Values
We included treatment-naive patients aged 16 years or older who started first-line ART in 2000 or later with a CD4 cell count of 350 cells per microliter or lower. Patients needed to have at least one VL measurement and one CD4 count 6 months or later after starting ART. The CD4 cell count at the start of ART was defined as the measurement closest to the date of starting ART, within a window extending from 90 days before to 30 days after the start of ART. We defined virologic failure as a single VL above 1000 copies per milliliter. We included measurements taken up to 5 years after starting ART. In both the derivation and validation cohorts, we imputed values missing between 2 measurements by interpolating on the log10 scale for VL and the square root scale for CD4 count. Measurements taken after switching to second-line ART were excluded.
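The analyses were performed in R; as an illustrative sketch only (not the authors' code), the interpolation scheme can be written as follows, transforming to the log10 scale for VL and the square root scale for CD4 count before interpolating linearly:

```python
import numpy as np

def interpolate_missing(times, values, query_time,
                        transform=np.log10, inverse=lambda y: 10 ** y):
    """Impute a missing measurement between two observed values by
    linear interpolation on a transformed scale (log10 for VL,
    square root for CD4 count), then back-transform."""
    y = transform(np.asarray(values, dtype=float))
    return inverse(np.interp(query_time, times, y))

# VL missing at month 9, observed at months 6 and 12 (copies/mL):
vl = interpolate_missing([6, 12], [400, 4000], 9)
# CD4 missing at month 9, observed at months 6 and 12 (cells/µL):
cd4 = interpolate_missing([6, 12], [250, 360], 9,
                          transform=np.sqrt, inverse=lambda y: y ** 2)
```

On the log10 scale, the interpolated midpoint is the geometric mean of the two observed VLs (here about 1265 copies/mL), which behaves more plausibly for VL than arithmetic interpolation.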
Development of Risk Charts and Rules for Targeted VL Testing
We used generalized additive models25 with a logit link and thin-plate regression splines26 with a monotonicity constraint to model the probability of virologic failure and develop the risk charts. In model 1, we included the current CD4 count, the CD4 count at start of ART, time on treatment, and gender. In model 2, the CD4 count at ART initiation was replaced by a count measured 6 months earlier, within a window of 2–9 months earlier. Most patients contributed multiple measurements during follow-up; these were treated as independent. Models included smoothers for the current CD4 count, time on treatment, and age. We developed optimal tripartite decision rules to support decisions on VL testing in settings where access to VL monitoring is limited, using a method developed by Liu et al.27 A tripartite decision rule is defined by 2 cutoff values that classify treatment outcomes into 3 categories, based on the predicted probability of virologic failure: successful ART, virologic failure, and uncertain outcome. VL is then measured in patients with an uncertain outcome. We developed decision rules assuming that resources allow for VL testing of 10%, 20%, or 40% of patients. The cutoffs were then chosen such that the 10%, 20%, or 40% of patients with the most uncertain outcome are tested. We also determined the optimal cutoff in the absence of VL testing. In all rules, we gave more weight to avoiding false negatives (60%) than to avoiding false positives (40%). In other words, we assumed that it is more important to avoid missing patients who truly failed than to avoid falsely classifying patients as failing treatment.
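A simplified Python sketch of how such a tripartite rule could be applied (a stand-in for, not a reimplementation of, the optimization of Liu et al.): with losses of 0.6 for false negatives and 0.4 for false positives, the loss-minimizing classification threshold is 0.4/(0.6+0.4) = 0.4, and the budgeted fraction of patients whose predicted probability lies closest to that threshold are sent for VL testing.

```python
import numpy as np

def tripartite_rule(p_fail, budget=0.20, w_fn=0.6, w_fp=0.4):
    """Classify each patient as 'success', 'failure', or 'test'
    from predicted probabilities of virologic failure.
    Simplified illustration, not the authors' exact algorithm."""
    p = np.asarray(p_fail, dtype=float)
    threshold = w_fp / (w_fn + w_fp)       # 0.4 with the 60/40 weights
    n_test = int(round(budget * len(p)))   # VL tests the budget allows
    # the n_test most uncertain patients (closest to the threshold)
    test_idx = np.argsort(np.abs(p - threshold))[:n_test]
    labels = np.where(p > threshold, "failure", "success").astype(object)
    labels[test_idx] = "test"
    return labels

labels = tripartite_rule([0.05, 0.10, 0.35, 0.45, 0.90], budget=0.4)
```

Here the two probabilities nearest 0.4 (0.35 and 0.45) fall in the uncertain band and are flagged for VL testing; the remainder are classified directly.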
Validation of Risk Charts and Model Fit
We calculated PPVs, negative predictive values (NPVs), sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC) for the derivation and validation cohorts. We checked the goodness of fit of the 2 models by graphically comparing observed and predicted risks. We overlaid the plots with a grid of 15 × 15 cells and compared the proportion of failures encountered within each of the 225 cells with the predicted number of failures. Finally, we compared the performance of the risk charts with the 2006 and 2013 WHO immunologic criteria for treatment failure.5,9 The 2006 criteria include a fall of the CD4 count to baseline (or below), a 50% fall from the on-treatment peak value, and persistent CD4 counts below 100 cells per microliter. The 2013 criteria are a simplified version of the 2006 criteria that do not include the fall from the on-treatment peak value.
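These accuracy measures follow directly from the 2 × 2 table of predicted against observed failure. A minimal, self-contained sketch (illustrative, not the study code):

```python
def diagnostic_accuracy(predicted, actual):
    """PPV, NPV, sensitivity, and specificity from binary
    predictions and observed outcomes (True = virologic failure)."""
    tp = sum(p and a for p, a in zip(predicted, actual))        # true positives
    fp = sum(p and not a for p, a in zip(predicted, actual))    # false positives
    fn = sum(not p and a for p, a in zip(predicted, actual))    # false negatives
    tn = sum(not p and not a for p, a in zip(predicted, actual))  # true negatives
    return {
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

metrics = diagnostic_accuracy(
    predicted=[True, True, False, False, True, False, False],
    actual=[True, False, False, True, True, False, False],
)
```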
We performed 3 sensitivity analyses. The first was a complete case analysis for which we did not impute any missing values. In the second sensitivity analysis, we used an alternative imputation method where we added random error to interpolated values. Finally, in the third sensitivity analysis, we examined the impact of assuming that multiple measurements in the same patient are independent. We weighted each measurement such that the weights of all measurements of one patient added up to 1. Every patient thus contributed the same weight to the analysis. For further details, see the technical appendix (see Supplemental Digital Content, http://links.lww.com/QAI/A712). All analyses were performed in R 3.0.1 (R Core Team, Vienna, Austria).
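The reweighting in the third sensitivity analysis can be sketched as follows (an illustrative snippet, not the authors' code): each measurement receives weight 1 divided by the number of measurements from that patient, so every patient contributes a total weight of 1.

```python
from collections import Counter

def per_patient_weights(patient_ids):
    """Weight each measurement by 1 / (measurements from that patient),
    so all of a patient's measurements sum to a weight of 1."""
    counts = Counter(patient_ids)
    return [1.0 / counts[pid] for pid in patient_ids]

# 3 patients contributing 3, 1, and 2 measurements, respectively:
weights = per_patient_weights(["A", "A", "A", "B", "C", "C"])
```

The weights sum to the number of patients (3 here) rather than the number of measurements, removing the influence of frequently measured patients.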
Selection of Eligible Patients
After excluding patients with missing or ineligible CD4 counts at the start of ART, 31,450 patients from the South African derivation cohort, 16,131 patients from the South African, 7796 patients from the Zambian, and 1356 patients from the Asia-Pacific validation cohorts were included in the development and validation of the risk charts based on model 1. Numbers were different for the second risk chart (model 2), which was based on current CD4 counts and CD4 counts measured 6 months previously: 36,511 patients from the derivation cohort in South Africa and 12,909 patients from the South African, 2854 patients from the Zambian, and 1367 patients from the Asia-Pacific validation cohorts. The selection of patients with reasons for exclusion is shown in Figure S1 (see Supplemental Digital Content, http://links.lww.com/QAI/A712).
The 31,450 patients starting ART in 1 of the 7 South African programs had a median age of 36 years, were predominantly female (18,597; 59%), and started ART with a median CD4 cell count of 111 cells per microliter (Table 1). The characteristics of the patients included in the South African and Zambian validation cohorts were similar to the derivation cohort, whereas in the cohorts from Asia-Pacific, most patients were men (918; 68%) and the median CD4 cell count at start of ART was lower (95 cells/μL). For model 2, patient characteristics were similar (see Table S1, Supplemental Digital Content, http://links.lww.com/QAI/A712).
The development of model 1 was based on 125,590 triplets of laboratory values: the CD4 count measured at the start of ART and one CD4 count and VL measured subsequently at the same time point, during a total of 68,611 person-years of follow-up. The validation datasets were based on 46,997 triplets measured during 32,005 person-years of follow-up (South Africa), 16,652 triplets measured during 19,951 person-years (Zambia), and 8498 triplets taken during 4375 person-years (Asia-Pacific). In the derivation cohorts, 11,972 (10%) of VL values and 12,045 (10%) of CD4 counts had been imputed by interpolation. Compared to the derivation cohorts, the proportion of imputed values was greater in the validation cohorts from South Africa and Zambia, and greater for VL in Asia-Pacific (Table 1). The numbers were similar for model 2 (see Table S1, Supplemental Digital Content, http://links.lww.com/QAI/A712).
The risk chart for virologic failure based on model 1 is shown in Figure 1, stratified by CD4 count at the start of ART and gender. Low probabilities of virologic failure are shown in blue, intermediate probabilities in yellow and orange, and high probabilities in red. At a given combination of current CD4 count and CD4 count at the start of ART, the probability of virologic failure increases with time on ART and is somewhat lower in women than in men. The optimal probability areas where patients should be tested if resources allow the testing of 10%, 20%, or 40% of patients are also shown. The range of patients to be tested widens with duration on ART, reflecting increasing uncertainty. Figure 2 shows the risk chart for model 2, stratified by gender and CD4 count measured 6 months previously. Again, at a given combination of current and previous CD4 count, the probability of virologic failure increases with time on ART and is lower in women than in men. Alternative presentations of the 2 risk charts are given in Figure S2 and Figure S3 (see Supplemental Digital Content, http://links.lww.com/QAI/A712).
Accuracy of Charts and Targeted VL Testing
The range of probabilities resulting in the testing of 10%, 20%, and 40% of patients in the derivation cohort slightly differed between models. For example, when assuming that 20% of patients can be tested, this range was 0.22–0.64 for model 1 and 0.20–0.67 for model 2. With model 1, the PPV increased from 61% to 87%, 94%, and 98% in the South African derivation cohort when moving from no VL testing to the testing of 10%, 20%, and 40% of patients (Table 2). The PPVs for the South African validation cohort increased from 48% (no testing) to 79% (10% tested), 91% (20% tested), and 98% (40% tested). The corresponding PPVs for Zambia were 35%, 64%, 80%, and 93%, and for Asia-Pacific 37%, 73%, 88%, and 96%. NPVs were close to 90% in all cohorts even without targeted VL testing and generally above 90% with targeted testing, except for the South African validation cohort with no testing (81%) and testing of 10% and 20% of patients (84%; 87%). Sensitivities increased from 33% (no VL testing) to 74% (40% tested) in the derivation cohort and, in the validation cohorts, from 24% (no VL testing) to 68% (40% tested) in South Africa, from 43% to 82% in Zambia, and from 25% to 71% in the Asia-Pacific cohorts (Table 2). The AUCs ranged from 0.63 using model 2 in the Zambian validation cohort without targeted VL testing to 0.95 in the South African derivation cohorts when using model 1 and assuming that 40% of patients had VL tests (Fig. 3).
Table S2 (see Supplemental Digital Content, http://links.lww.com/QAI/A712) gives PPVs, NPVs, sensitivity, and specificity for different threshold probabilities for virologic failure, assuming that targeted VL testing is not available. As expected, the PPV increased with higher thresholds, whereas sensitivity declined. The comparison with the WHO criteria for immunological failure showed that in the absence of targeted VL monitoring, the performance of the risk charts and the different WHO criteria was similar (see Table S3, Supplemental Digital Content, http://links.lww.com/QAI/A712). The goodness of fit of the 2 models, as assessed by comparing observed and predicted risks, was generally high (for details, see Technical Appendix, Supplemental Digital Content, http://links.lww.com/QAI/A712).
The results of the complete case analysis were similar to the main analysis, with only small differences in the accuracy of predictions, typically in the range of plus or minus 0%–5% in PPV, NPV, sensitivity, or specificity (see Table S4, Supplemental Digital Content, http://links.lww.com/QAI/A712). The same was the case when adding random normal errors to the imputed values used in the main analysis (see Table S5, Supplemental Digital Content, http://links.lww.com/QAI/A712). Finally, the results of the analysis to which each patient contributed the same weight were also similar to the main analysis (see Table S6, Supplemental Digital Content, http://links.lww.com/QAI/A712).
The measurement of the CD4 count remains necessary to assess ART eligibility in many settings, and the CD4 count is also important to gauge the risk of clinical progression and to guide clinical decisions about prophylactic treatments and screening for opportunistic infections.9,10 We used data from the large IeDEA collaboration to develop and validate charts of the risk of virologic failure in adult HIV-positive patients starting ART, based on 2 CD4 counts measured at different times. More than 30,000 adult patients starting ART were involved in the development of the charts, and up to 25,000 patients were included in their validation. The risk charts define optimal ranges of risk at which patients should be tested for VL, assuming that resources permit the targeted testing of 10%, 20%, or 40% of patients. The PPVs increased substantially with targeted VL testing, even when only 10% of patients were tested, and were around 90% with the testing of 20% of patients. Sensitivity also increased: the decision rule based on the testing of 20% of patients identified 50%–70% of patients with virologic failure.
The development of the charts based on 7 South African urban and township cohorts, the definition of tripartite decision rules using state-of-the-art methods,27 and the thorough validation are important strengths of this study. The risk charts were validated in a large ART program in the Eastern Cape, KwaZulu-Natal, and Mpumalanga provinces of South Africa, which included many rural treatment sites, a treatment program in the greater Lusaka metropolitan area in Zambia, and programs in 12 different countries in the Asia-Pacific region. Our study therefore meets several dimensions of generalizability and applicability, including geographic, spectrum, and methodological transportability.28 Indeed, accuracy was maintained when the charts were tested in patients from different locations, with more or less advanced immunodeficiency, and across sites that differed with respect to data collection, follow-up intervals, and monitoring strategies. Such multiple validations are possible only in large international cohort collaborations, such as IeDEA.12
VL testing to monitor ART is strongly recommended by WHO.9,10 However, a 2012 survey found that few programs in sub-Saharan Africa had access to routine VL testing.29 As discussed in detail in recent WHO and UNITAID reports,10,30 scaling up VL testing in resource-limited settings is challenging. For example, plasma obtained from EDTA-anticoagulated whole blood is the preferred sample for the common VL platforms, but obtaining plasma may not be feasible in remote clinics because of the lack of electricity to operate centrifuges and maintain the cold-chain.10 Point-of-care laboratory tests are being developed both for CD4 cell count and for VL.31 Some point-of-care VL tests are designed to be used in clinics in remote settings, by auxiliary staff and in the absence of a reliable electricity supply. The risk charts may facilitate the cost-effective use of point-of-care and standard VL tests32 and generally support the transition to routine VL monitoring.10
As in previous studies,33–35 we defined virologic failure as a single VL above 1000 copies per milliliter. Virologic failure should not be confused with treatment failure, which is defined by WHO as 2 consecutive VL measurements exceeding 1000 copies per milliliter, within a 3-month interval, with adherence support between measurements, after at least 6 months of using ARV drugs.9 Also, we stress that the risk charts inform decisions on VL testing and adherence support but they do not on their own provide conclusive evidence for switching patients to second-line ART. Furthermore, although our study assessed the accuracy of the risk charts in different patient populations, it did not examine the effects of using these charts to monitor patients starting ART in resource-limited settings. Ideally, different monitoring strategies should be compared in pragmatic randomized trials with patient-relevant outcomes, such as disease progression and mortality. Previous trials compared clinical monitoring with routine CD4 count monitoring, or CD4 count with CD4 count and VL monitoring.36,37 To our knowledge, no trials of risk-based targeted VL monitoring have been performed.
The models underlying the charts might be improved by including other variables predictive of virologic failure. The lack of data on adherence is an important limitation of our study: in the Aid for AIDS program in Southern Africa, adherence assessments based on pharmacy refill data were as accurate as CD4 counts for detecting virologic failure.38 A clinical prediction rule developed at the Sihanouk Hospital Center of Hope in Cambodia (based on adherence, and changes in CD4 cell count and hemoglobin values) had a sensitivity close to 50% and specificity of more than 90%.33 The performance of other scoring systems was similar, with improved sensitivity compared to the WHO criteria.34,3534,35 Few of these scores had undergone external validation, but it is noteworthy that the sensitivity of the Cambodian score dropped to 23% when used in Uganda.34
Our study has other limitations. We only considered patients starting ART at CD4 cell counts of 350 cells per microliter or below. Some countries are moving toward initiating patients at a CD4 count below 500 cells per microliter and toward initiating ART in all pregnant women regardless of CD4 count. However, most patients still initiate ART at much lower CD4 counts. For example, in 2013, the median CD4 cell count was 231 cells per microliter in the Republic of South Africa, 212 cells per microliter in Malawi, 205 cells per microliter in Botswana, and 180 cells per microliter in Tanzania.39 The charts will therefore be relevant to many adult patients, and a similar study in children is now under way. Also, the charts will be updated and extended to beyond 5 years as more data accumulate in the IeDEA cohorts.
In conclusion, the risk charts developed and validated in this study should be useful for a range of ART programs and settings, including programs that have relied on CD4 count monitoring and are now transitioning to targeted or routine VL testing. In settings that continue to have no access to VL testing, the charts may provide a more user-friendly alternative to the WHO immunologic criteria for treatment failure.5,9 Field studies are now required to clarify the utility of these charts.
The authors thank all patients, clinical, management, data entry, and support staff in the participating clinics.
1. World Health Organization; UNAIDS; UNICEF. Global Update on HIV Treatment 2013: Results, Impact and Opportunities. 2013. Available at: http://www.unaids.org/en/media/unaids/contentassets/documents/unaidspublication/2013/20130630_treatment_report_en.pdf. Accessed June 26, 2014.
2. World Health Organization. WHO HIV Drug Resistance Report 2012. Available at: http://apps.who.int/iris/bitstream/10665/75183/1/9789241503938_eng.pdf. Accessed June 26, 2014.
3. Sigaloff KC, Hamers RL, Wallis CL, et al. Unnecessary antiretroviral treatment switches and accumulation of HIV resistance mutations; two arguments for viral load monitoring in Africa. J Acquir Immune Defic Syndr. 2011;58:23–31.
4. Keiser O, Orrell C, Egger M, et al. Public-health and individual approaches to antiretroviral therapy: township South Africa and Switzerland compared. PLoS Med. 2008;5:e148.
5. World Health Organization. Antiretroviral therapy for HIV infection in adults and adolescents in resource-limited settings: towards universal access. Recommendations for a public health approach. 2006. Available at: http://www.who.int/hiv/pub/guidelines/artadultguidelines.pdf. Accessed June 26, 2014.
6. Keiser O, MacPhail P, Boulle A, et al. Accuracy of WHO CD4 cell count criteria for virological failure of antiretroviral therapy. Trop Med Int Health. 2009;14:1220–1225.
7. Mee P, Fielding KL, Charalambous S, et al. Evaluation of the WHO criteria for antiretroviral treatment failure among adults in South Africa. AIDS. 2008;22:1971–1977.
8. Keiser O, Tweya H, Boulle A, et al. Switching to second-line antiretroviral therapy in resource-limited settings: comparison of programmes with and without viral load monitoring. AIDS. 2009;23:1867–1874.
9. World Health Organization. Consolidated Guidelines on the Use of Antiretroviral Drugs for Treating and Preventing HIV Infection: Recommendations for a Public Health Approach. Geneva, Switzerland: World Health Organization; 2013. Available at: http://apps.who.int/iris/bitstream/10665/85321/1/9789241505727_eng.pdf. Accessed June 26, 2014.
10. World Health Organization. March 2014 Supplement to the 2013 Consolidated Guidelines on the Use of Antiretroviral Drugs for Treating and Preventing HIV Infection. Recommendations for a Public Health Approach. Geneva, Switzerland: World Health Organization; 2014. Available at: http://apps.who.int/iris/bitstream/10665/104264/1/9789241506830_eng.pdf?ua=1. Accessed June 26, 2014.
11. Lynen L, An S, Koole O, et al. An algorithm to optimize viral load testing in HIV-positive patients with suspected first-line antiretroviral therapy failure in Cambodia. J Acquir Immune Defic Syndr. 2009;52:40–48.
12. Egger M, Ekouevi DK, Williams C, et al. Cohort profile: the international epidemiological databases to evaluate AIDS (IeDEA) in sub-Saharan Africa. Int J Epidemiol. 2012;41:1256–1264.
13. Gange SJ, Kitahata MM, Saag MS, et al. Cohort profile: the North American AIDS cohort collaboration on research and design (NA-ACCORD). Int J Epidemiol. 2007;36:294–301.
14. McGowan CC, Cahn P, Gotuzzo E, et al. Cohort profile: Caribbean, Central and South America Network for HIV research (CCASAnet) collaboration within the International Epidemiologic Databases to Evaluate AIDS (IeDEA) programme. Int J Epidemiol. 2007;36:969–976.
15. Lawn SD, Harries AD, Anglaret X, et al. Early mortality among adults accessing antiretroviral treatment programmes in sub-Saharan Africa. AIDS. 2008;22:1897–1908.
16. Boulle A, Van Cutsem G, Hilderbrand K, et al. Seven-year experience of a primary care antiretroviral treatment programme in Khayelitsha, South Africa. AIDS. 2010;24:563–572.
17. Eshun-Wilson I, Plas HV, Prozesky HW, et al. Combined antiretroviral treatment initiation during hospitalization: outcomes in South African adults. J Acquir Immune Defic Syndr. 2009;51:105–106.
18. Ojikutu BO, Zheng H, Walensky RP, et al. Predictors of mortality in patients initiating antiretroviral therapy in Durban, South Africa. S Afr Med J. 2008;98:204–208.
19. Fox MP, Maskew M, Macphail AP, et al. Cohort profile: the Themba Lethu Clinical Cohort, Johannesburg, South Africa. Int J Epidemiol. 2012. Available at: http://www.ncbi.nlm.nih.gov/pubmed/22434860. Accessed June 26, 2014.
20. Hoffmann CJ, Charalambous S, Thio CL, et al. Hepatotoxicity in an African antiretroviral therapy cohort: the effect of tuberculosis and hepatitis B. AIDS. 2007;21:1301–1308.
21. Houlihan CF, Bland RM, Mutevedzi PC, et al. Cohort profile: Hlabisa HIV treatment and care programme. Int J Epidemiol. 2011;40:318–326.
22. Fatti G, Grimwood A, Bock P. Better antiretroviral therapy outcomes at primary healthcare facilities: an evaluation of three tiers of ART services in four South African provinces. PLoS One. 2010;5:e12888.
23. Stringer JS, Zulu I, Levy J, et al. Rapid scale-up of antiretroviral therapy at primary care sites in Zambia: feasibility and early outcomes. JAMA. 2006;296:782–793.
24. Zhou J, Kumarasamy N, Ditangco R, et al. The TREAT Asia HIV Observational Database: baseline and retrospective data. J Acquir Immune Defic Syndr. 2005;38:174–179.
25. Wood SN. Fast stable restricted maximum likelihood and marginal likelihood estimation of semiparametric generalized linear models. J R Stat Soc B. 2011;73:3–36.
26. Wood SN. Thin-plate regression splines. J R Stat Soc B. 2003;65:95–114.
27. Liu T, Hogan JW, Wang L, et al. Optimal allocation of gold standard testing under constrained availability: application to assessment of HIV treatment failure. J Am Stat Assoc. 2013;108:1173–1188.
28. Justice AC, Covinsky KE, Berlin JA. Assessing the generalizability of prognostic information. Ann Intern Med. 1999;130:515–524.
29. Lynch S, Ford N, van Cutsem G, et al. Public health. Getting HIV treatment to the most people. Science. 2012;337:298–300.
30. UNITAID Secretariat. HIV/AIDS Diagnostic Technology Landscape. 3rd ed. Geneva, Switzerland: World Health Organization; 2013. Available at: http://www.unitaid.eu/images/marketdynamics/publications/UNITAID-HIV_Diagnostic_Landscape-3rd_edition.pdf. Accessed June 26, 2014.
31. Rowley CF. Developments in CD4 and viral load monitoring in resource-limited settings. Clin Infect Dis. 2014;58:407–412.
32. Estill J, Egger M, Blaser N, et al. Cost-effectiveness of point-of-care viral load monitoring of ART in resource-limited settings: mathematical modelling study. AIDS. 2013. Available at: http://www.ncbi.nlm.nih.gov/pubmed/23462219. Accessed June 26, 2014.
33. Phan V, Thai S, Koole O, et al. Validation of a clinical prediction score to target viral load testing in adults with suspected first-line treatment failure in resource-constrained settings. J Acquir Immune Defic Syndr. 2013;62:509–516.
34. Abouyannis M, Menten J, Kiragga A, et al. Development and validation of systems for rational use of viral load testing in adults receiving first-line ART in sub-Saharan Africa. AIDS. 2011;25:1627–1635.
35. Meya D, Spacek LA, Tibenderana H, et al. Development and evaluation of a clinical algorithm to monitor patients on antiretrovirals in resource-limited settings using adherence, clinical and CD4 cell count criteria. J Int AIDS Soc. 2009;12:3.
36. Mugyenyi P, Walker AS, Hakim J, et al. Routine versus clinically driven laboratory monitoring of HIV antiretroviral therapy in Africa (DART): a randomised non-inferiority trial. Lancet. 2010;375:123–131.
37. Mermin J, Ekwaru JP, Were W, et al. Utility of routine viral load, CD4 cell count, and clinical monitoring among adults with HIV receiving antiretroviral therapy in Uganda: randomised trial. BMJ. 2011;343:d6792.
38. Bisson GP, Gross R, Bellamy S, et al. Pharmacy refill adherence compared with CD4 count changes for monitoring HIV-infected adults on antiretroviral therapy. PLoS Med. 2008;5:e109.
39. Ford N, Mills EJ, Egger M. Immunodeficiency at start of antiretroviral therapy: the persistent problem of late presentation to care. Clin Infect Dis. 2015;60:1128–1130.