In 2016, the Joint United Nations Programme on HIV and AIDS (UNAIDS) reported that 17 million HIV-infected people worldwide were receiving antiretroviral therapy (ART), most of whom live in resource-limited settings (RLSs); this dramatic increase in ART access in RLS is a major public health success.1 However, a key challenge facing HIV treatment programs in RLS is ART monitoring2–4; the vast majority of HIV treatment programs in RLS have relied primarily on clinical and immunological criteria to determine the therapeutic efficacy of ART. Although widely recognized as the gold standard for ART monitoring, determination of virologic failure (VF) through routine viral load (VL) monitoring has largely been unavailable. UNAIDS estimates that 36% of HIV-infected patients on ART in low- and middle-income countries can access VL monitoring and projects that this figure will increase to only 57% by 2019. The main barriers to VL access have traditionally been cost and the numerous infrastructure requirements for collection and transport of plasma.5–8
Increasing numbers of patients on ART in RLS in the absence of routine VL monitoring have raised serious concerns about the development of resistance to first-line ART regimens and potential implications for second-line therapy.9 Many programs in sub-Saharan Africa report that less than 3%–5% of all ART patients are receiving second-line ART regimens.7,10–12 The low proportion of patients on second-line regimens adds programmatic evidence to numerous studies that demonstrate the inferiority of clinical and immunologic criteria for early identification of VF,13 thus favoring the accumulation of drug-resistance–associated mutations; this is a major cause of treatment failure and has been associated with higher mortality rates.14–16
Recent developments have galvanized efforts to increase access to routine VL monitoring. The 2016 World Health Organization (WHO)-consolidated guidelines recommend routine VL monitoring for patients on ART,17 and the ambitious UNAIDS 90-90-90 goals call for a VL suppression rate of 90% among all patients on ART.18 This high-level advocacy has been accompanied by marked decreases in VL test costs; many RLS countries can now purchase VL-testing reagents for as little as $9.50 per test.19 However, the infrastructure requirements for expansion of conventional VL monitoring (plasma) remain daunting. This has led to increased focus on the use of dried blood spots (DBSs) prepared from whole blood for VL testing.
DBS resolves many of the complex issues of plasma collection, preparation, storage, and transport, and studies of its use in VL monitoring have been promising.20–22 However, key questions remain on factors affecting the diagnostic accuracy of DBS for VF; these factors include the type of blood collected (venous vs. capillary blood), quantification of blood used to prepare DBS cards (measured volume of blood vs. unknown volume), VL test platform, and conditions of specimen preparation [field vs. reference laboratory and level of training of health care workers preparing specimens]. There are limited data on DBS VL quantification performed on whole blood obtained through finger, heel, and toe sticks (capillary blood) and on DBS VL quantification prepared under field conditions by nonreference laboratory health care workers.20–22
To further investigate these factors and to inform VL scale-up in RLS, we evaluated the performance of DBS for VL monitoring using simplified DBS-spotting modalities in routine patient-care conditions in adults and children on ART in Nairobi and western Kenya.
Study Setting and Participants
At the time of study implementation in 2013, Kenya was implementing targeted plasma VL testing when treatment failure was suspected based on clinical or immunological indications, before single-drug substitution, and to confirm optimal viral suppression among pregnant women on ART. In 2014, Kenya transitioned to routine VL monitoring as per the WHO guidelines and included an option for DBS.
We conducted a cross-sectional study in 12 purposively selected HIV clinics in Nairobi and western regions of Kenya. Sites (3 in Nairobi and 9 in western Kenya) were selected based on the high volume of plasma VL testing (average of 30 or more VL tests/mo) and the use of DBS for early infant diagnosis (EID).
Two groups of participants who presented for routine HIV appointments and consented for the study were consecutively enrolled: (1) HIV-infected adults aged ≥15 years, on ART for ≥6 months, and for whom a targeted VL test was requested by their clinician; (2) HIV-infected children aged 7 months to <15 years and on ART for ≥6 months, regardless of whether a targeted VL test was requested. The wider pediatric enrollment criteria were chosen to increase enrollment numbers because of the low number of children on ART relative to adults. Demographic information and ART history were extracted for each participant from routine VL-request forms, irrespective of the enrollment criteria.
Blood Collection and Processing
Venous blood and capillary blood were collected from each participant and were used to prepare 1 plasma sample (venous) and 3 DBS samples (1 venous and 2 capillary). Plasma specimens were prepared from the venous blood drawn into the EDTA collection tube within 5 hours by centrifugation at 1600g for 10 minutes and stored at −20°C while waiting for shipment to the testing laboratories. DBSs were prepared using Schleicher & Schuell 903 (S&S 903–W041) filter paper (see Figure, Supplemental Digital Content 1, http://links.lww.com/QAI/A963, DBS sample collection methods). Venous-blood DBS (V-DBS) was prepared by spotting 50 μL of venous blood collected into an EDTA tube onto each of 5 preprinted circles using a disposable plastic transfer pipette. Two additional DBS cards were prepared from finger-prick capillary blood by either spotting 2 drops of blood from a 50-μL microcapillary tube onto each of 3 preprinted circles in a card (M-DBS) or by directly spotting 1 drop of blood from the finger-prick site onto each of 3 circles in a card (D-DBS). The 2 capillary blood sample collection methods (M-DBS and D-DBS) were selected because they are simplified field practices currently used in Kenya to prepare DBS for EID. Prepared DBS cards were dried at ambient temperature overnight on a drying rack, placed individually in a glassine envelope within a zip-lock bag containing 5 desiccants and 1 humidity monitor, and were stored at the sites at ambient temperature.
Within 1 week of sample collection, plasma and DBS samples were transported to the Kenya Medical Research Institute Laboratory in Kisumu in western Kenya and the National HIV Reference Laboratory (NHRL) in Nairobi. Plasma samples were shipped under frozen conditions, and DBS cards were shipped at ambient temperature. On receipt at the testing laboratory, plasma and DBS samples were stored at −20°C until the time of testing.
VL Measurement in Plasma and DBS Specimens
Plasma VL Measurement
Plasma VL was determined according to the manufacturer's procedures using the COBAS Ampliprep/COBAS TaqMan (CAP/CTM) HIV-1 test, version 2.0 (Roche Molecular Diagnostics, Pleasanton, CA) and Abbott RealTime HIV-1 assay (Abbott Laboratories, Wiesbaden, Germany).
DBS VL Measurement
The laboratory-validated CAP/CTM HIV-1 test, version 2.0, using Specimen Pre-Extraction buffer and one 50-μL spot (Roche Molecular Diagnostics) and the Abbott RealTime HIV-1 assay using Bulk Lysis Buffer and two 50-μL spots were used to measure DBS VL as per the manufacturers' recommendations. Both the Abbott and CAP/CTM assays were used for V-DBS VL measurement; because of limited resources and DBS spots for testing, M-DBS and D-DBS VL measurements were performed only on the Abbott platform. Because use of DBS for VL testing was not recommended in the Kenya HIV Treatment Guidelines at the time of the study, DBS VL results were not returned to the patients. For both DBS VL assays, we used 1000 copies/mL as the lower limit of detection, taking into consideration the WHO recommendation on the definition of VF for patients on ART.17
VL results were summarized for all sample types [median and interquartile range (IQR) VL, median differences in VL]. We conducted 2 analyses of VL for each type of DBS: agreement of absolute VL results between V-, M-, and D-DBS and plasma using Bland–Altman plots, and diagnostic accuracy indicators (sensitivity, specificity, misclassification rate, and kappa coefficient) of V-, M-, and D-DBS for VF (defined by plasma VL ≥1000 copies/mL). The analyses of VF were conducted at 3 DBS VF thresholds (1000, 3000, and 5000 copies/mL) to ascertain the effect on clinical misclassification rates. Downward misclassification (false negative for diagnosis of VF) was defined as a sample ≥1000 copies/mL by plasma but <1000, 3000, or 5000 copies/mL by DBS for each respective DBS-threshold analysis. Upward misclassification (false positive for diagnosis of VF) was defined as a sample <1000 copies/mL by plasma but ≥1000, 3000, or 5000 copies/mL by DBS for each respective DBS-threshold analysis. These analyses were performed separately for children and adults, as well as for the overall study population.
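As an illustration of the misclassification and accuracy definitions above (the study's actual analyses were performed in SAS; the paired VL values below are hypothetical), the classification logic can be sketched as:

```python
# Illustrative sketch of the diagnostic-accuracy definitions used in the
# analysis; the paired VL values are hypothetical, not study data.

def vf_accuracy(plasma_vl, dbs_vl, dbs_threshold):
    """Compare DBS against plasma for virologic failure (VF).

    Plasma VF is fixed at >=1000 copies/mL; the DBS threshold varies
    (1000, 3000, or 5000 copies/mL in the study).
    """
    tp = fp = tn = fn = 0
    for p, d in zip(plasma_vl, dbs_vl):
        plasma_vf = p >= 1000
        dbs_vf = d >= dbs_threshold
        if plasma_vf and dbs_vf:
            tp += 1
        elif plasma_vf and not dbs_vf:
            fn += 1          # downward misclassification (missed VF)
        elif not plasma_vf and dbs_vf:
            fp += 1          # upward misclassification (false VF)
        else:
            tn += 1
    n = tp + fp + tn + fn
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    # Cohen's kappa: observed agreement corrected for chance agreement
    po = (tp + tn) / n
    pe = ((tp + fp) * (tp + fn) + (tn + fn) * (tn + fp)) / n**2
    kappa = (po - pe) / (1 - pe)
    return sensitivity, specificity, fn / n, fp / n, kappa

# Hypothetical paired results (copies/mL)
plasma = [500, 40, 2500, 12000, 800, 150000, 900, 4000]
dbs    = [700, 30, 1800,  9000, 1500, 90000, 400, 2500]
sens, spec, down, up, kappa = vf_accuracy(plasma, dbs, dbs_threshold=1000)
```

Raising `dbs_threshold` to 3000 or 5000 copies/mL converts borderline positives into negatives, which is why sensitivity falls and specificity rises across the 3 threshold analyses.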
As a quality check, results from the CAP/CTM plasma assay were compared with the Abbott plasma results. Any participant with greater than a 0.7-log difference between the 2 plasma results was removed from the final data set, as per standard quality assurance practice.23 In addition, patients without at least 1 DBS result and 1 valid Abbott plasma result were also excluded because paired comparisons could not be performed.
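The 0.7-log cross-platform quality check amounts to a simple filter on paired plasma results; a minimal sketch (hypothetical values, not the study's SAS code):

```python
import math

def passes_qc(cap_ctm_vl, abbott_vl, max_log_diff=0.7):
    """Cross-platform plasma quality check: keep a participant only if
    the CAP/CTM and Abbott plasma VLs (copies/mL) agree within
    0.7 log10, per standard quality assurance practice."""
    return abs(math.log10(cap_ctm_vl) - math.log10(abbott_vl)) <= max_log_diff

# A 2-fold difference (~0.3 log10) passes; a 10-fold difference does not.
ok = passes_qc(1000, 2000)
bad = passes_qc(1000, 10000)
```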
Statistical analyses were performed using SAS statistical software, version 9.3 (SAS Institute Inc., Cary, NC). Differences between DBS performance of adult and children samples at different testing platforms, DBS types, and VL thresholds were compared by examining the corresponding 95% confidence interval (CI).
The study was approved by the Centers for Disease Control and Prevention, Division of HIV and Tuberculosis (DGHT), Associate Director for Science, and Kenya Medical Research Institute Ethics Review Committee.
Overall, 1035 participants consented for the study. Because of site-level procedural reasons and missing laboratory forms for 82 participants, blood samples were collected and tested from 953 participants. After exclusion of participants who lacked a valid Abbott plasma result and at least 1 Abbott DBS result (because of missing or invalid results or failure to meet the plasma quality check between CAP/CTM and Abbott), 793 participants had samples available for analysis (Figure 1).
Table 1 shows the demographic and clinical characteristics of the 793 participants included in the analysis. There were 416 (52.5%) adults and 377 (47.5%) children. For adult participants, almost two-thirds [266/415 (64.1%)] were women; the median age was 36.3 years (IQR 30.4–43.7); and the median time on ART was 49.9 months (IQR 28.8–72.8). VF among all adults, defined as plasma VL ≥1000 copies/mL, was 31.0% (129/416). Among those with clinical or immunological indication for VL testing, 37.4% had VF (91/243), whereas only 22.0% (38/173) had VF among those without clinical or immunological indication.
Of the 377 children, 175 (46.9%) were female. The median age was 7.3 years (IQR 4.4–9.6), and the median time on ART was 41.9 months (IQR 26.0–59.6). Among all children, 32.9% (124/377) had VF. Among the 45 children with a clinical or immunological indication for VL testing, the VF rate was almost double (60.0%, 24/45), whereas it was 29.2% (97/332) for children without an indication for VL testing.
A separate analysis found no statistically significant differences in the plasma–DBS comparison between adults and children for all comparison analyses (data not shown). The results presented below are thus a combined analysis of adults and children.
Plasma–DBS VL Comparison
Bland–Altman plots show minimal mean differences ranging from −0.05 to 0.09 (SD range −1.03 to 0.93) between the plasma and DBS VL on both the Abbott and CAP/CTM platforms regardless of DBS types (V-, M-, or D-DBS) (Figure 2).
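A Bland–Altman summary reduces to the mean and SD of paired log10 VL differences, with 95% limits of agreement at mean ± 1.96 SD; a minimal sketch, assuming VLs are compared on the log10 scale (hypothetical data, not the study's SAS analysis):

```python
import math

def bland_altman(plasma_vl, dbs_vl):
    """Mean difference, SD, and 95% limits of agreement for paired
    log10 VLs (DBS minus plasma), as in a Bland-Altman analysis."""
    diffs = [math.log10(d) - math.log10(p) for p, d in zip(plasma_vl, dbs_vl)]
    n = len(diffs)
    mean = sum(diffs) / n
    # Sample SD of the paired differences
    sd = math.sqrt(sum((x - mean) ** 2 for x in diffs) / (n - 1))
    return mean, sd, (mean - 1.96 * sd, mean + 1.96 * sd)

# Hypothetical paired VLs (copies/mL): one DBS reads high, one reads low
mean_diff, sd, limits = bland_altman([1000, 10000], [2000, 5000])
```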
Sensitivity, Specificity, and Misclassification of DBS VL Measurements Compared With Those of Plasma VL in Defining VF
We conducted analyses on the sensitivity, specificity, and kappa agreement as well as misclassification rates of DBS VL in determining VF using DBS testing platforms (Abbott or CAP/CTM) and DBS types using 3 DBS VL thresholds of 1000, 3000, and 5000 copies/mL in comparison with Abbott plasma VL at a constant 1000-copies/mL threshold among all the participants. Table 2 summarizes the sensitivity, specificity, kappa agreement, and misclassification rates.
DBS VLs tested on the Abbott platform were highly comparable with plasma VLs; Kappa values ranged from 0.82 to 0.87 for all DBS sample types using the 3 DBS VL cutoff thresholds. Sensitivity ranged from 82.2% to 90.3%, whereas specificity ranged from 93.1% to 98.8%. As the DBS VL cutoff threshold increased, the sensitivity for predicting VF decreased and specificity increased, when Abbott platform was used for DBS VL testing. For example, V-DBS tested using the Abbott platform had a peak sensitivity of 90.1% (95% CI: 85.7 to 93.6) at 1000-copies/mL VL cutoff threshold and a peak specificity of 98.4% (95% CI: 96.9 to 99.3) at the 5000 copies/mL VL cutoff threshold.
Similarly, as the DBS VL cutoff threshold for determining VF increases, the proportion of upward misclassification (patients classified with VF who actually had viral suppression) decreases, whereas the proportion of downward misclassification (patients classified as virologically suppressed who actually had VF) increases. For example, the lowest proportion of upward misclassification for Abbott V-DBS was 1.6% at the 5000-copies/mL cutoff threshold, whereas the lowest proportion of downward misclassification was 9.9% at the 1000-copies/mL cutoff threshold. Overall, there were no significant differences in sensitivity, specificity, agreement, or misclassification between DBS types tested on the Abbott platform within each VL threshold category (Table 2).
In contrast, kappa agreement analysis revealed that V-DBS tested using the CAP/CTM platform had significantly lower kappa values (0.20, 0.41, and 0.52 for the 1000, 3000, and 5000 copies/mL VL cutoff thresholds, respectively) when compared with plasma VL tested using the Abbott platform. Although the sensitivity was high at each VL cutoff threshold (94.4%, 88.4%, and 79.7% for the 1000, 3000, and 5000 copies/mL cutoffs, respectively), the specificity was significantly lower than that of DBS VL tested using the Abbott platform (33.0%, 60.9%, and 77.0% for the 1000, 3000, and 5000 copies/mL cutoffs, respectively; Table 2).
To explore the nature of VF misclassification using DBS VL tested on both the Abbott and CAP/CTM platforms, we stratified the plasma VL results tested using the Abbott platform into 4 categories and compared the misclassification rates among these 4 VL-level categories (Table 3). For all 3 DBS types tested using the Abbott platform, more than 90% of the patients identified as having VF by plasma VL were correctly determined as having VF by DBS VL. Furthermore, more than 90% of misclassification occurred between the 2 lowest categories of <1000 and 1000–9999 copies/mL, and misclassification rates of only 0%–1.4% were noticed in the ≥100,000 copies/mL category. In contrast, DBS VL tested on the CAP/CTM platform correctly identified only 33.1% of those patients with VF determined by plasma VL, and 11.2% of the misclassification fell in the 10,000–99,999 copies/mL category.
This study is the first in Kenya and one of the largest in the region to evaluate the utility of V-DBS and capillary-blood DBSs collected in the clinic setting using RLS-appropriate methods for determination of VF in both adults and children on ART. Although previous studies have shown good concordance between DBS and plasma for quantification of VL, many have focused on specific populations such as adults or those naive to ART, and most have been conducted in controlled laboratory settings.
Several studies that have shown good concordance between V-DBS and plasma on multiple platforms, including Abbott and CAP/CTM, used calibrated micropipettes to prepare V-DBS,20,24,25 whereas this study used disposable transfer pipettes to prepare the V-DBS sample. Two studies in Malawi used capillary blood and also found good concordance; one transferred drops using a "graduated capillary" tube, whereas the other used a 50-μL capillary tube similar to this study's method.20,22 A Zimbabwe study found excellent correlation using a "directly dropped" method, such as D-DBS in this study, tested on the NucliSENS v2.0 assay.21 The less-precise sample collection methods used in this study aimed to be simple and practical for RLS, particularly the capillary-blood options, as many settings cannot perform venous-blood collection. The M-DBS and D-DBS sample collection methods may be especially useful in pediatric populations, in whom venipuncture can often be challenging. In addition, use of the disposable transfer pipette for the venous sample was a deliberately less-precise method chosen to better reflect RLSs.
Abbott VL results from all DBS types (V-, M-, and D-DBS) for adults and children showed strong correlation with the gold standard Abbott plasma results, indicating that all 3 methods are acceptable alternatives to plasma VL testing on the Abbott platform for identification of patients with VF. Results from this study were thus comparable with those of similar studies conducted in both ideal laboratory and field conditions that used venous blood in EDTA, capillary blood collected in EDTA, or other more "advanced" field-based sample collection methods on the Abbott platform.22,26 The comparatively poor performance of the CAP/CTM platform for detection of VF from DBS samples is in accordance with a Namibia study that used venous blood micropipetted onto a DBS card and found specificities of 0.26 and 0.55 at the 1000 and 5000 copies/mL cutoffs, respectively, and a sensitivity of 0.99 at both thresholds.25
This study highlights the important clinical implications of different DBS thresholds for classification of VF. An upward misclassification means that patients who are virologically suppressed are incorrectly categorized as virologically failing, potentially leading to an unnecessary switch to a second or third line regimen. However, based on the WHO and Kenya's current VL guidelines, a repeat VL would be performed before any treatment switch, increasing the likelihood for subsequent correct classification and avoiding an unnecessary regimen change. A downward misclassification, however, means that patients who are virologically failing are incorrectly categorized as virologically suppressed. According to the WHO and Kenya's national VL algorithms, these patients would not be scheduled for a repeat VL until 12 months later or until they develop signs of treatment failure and thus would be continued on a failing ART regimen. This increases the risk of clinical and immunological deterioration, development of HIV drug–resistance mutations, and transmission of HIV.27
Based on these considerations, minimizing downward misclassification (or maximizing sensitivity) is preferred from a clinical and programmatic standpoint when acceptable sensitivity and specificity exist for the given platform. Therefore, when using the Abbott platform, this study indicates that the optimal threshold for determination of VF using DBS is 1000 copies/mL; at this threshold, false-negative misclassification is lowest for all DBS types, and no significant difference in performance exists between the 3 DBS sample types. Although the proportion of false negatives on the CAP/CTM platform is even lower than that of Abbott at the 1000-copies/mL threshold, the low specificity is prohibitive because it would lead to unacceptably high repeat VL testing and risk of unnecessary switches to costly and scarce second- and third-line ART regimens.
Prevalence of VF among adults and children was high in our study population, which was expected among adults because their VL testing was primarily targeted. Determinants of VF among children, many of whom did not meet the criteria for targeted VL, will be explored in another publication. These findings support the 2016 WHO consolidated guidelines, which recommend routine VL testing as the preferred method for monitoring patients on ART and modified the VF threshold for DBS to 1000 copies/mL.17
A limitation of this study is the use of a convenience purposive sample of urban and peri-urban high-volume ART facilities; however, other operations research has shown that use of DBS is feasible in remote RLSs.28,29 In addition, all samples in this study were collected by laboratory technicians or phlebotomists, and many smaller lower-level sites in RLS may not have the same cadres. However, the success of other programs that use similar methods of DBS collection, such as EID, suggests that lower-level sites and other health care cadres can be successfully supported to produce valid DBS samples for VL monitoring. This study also did not examine the diagnostic accuracy of DBS when samples are stored at ambient temperature for a longer period; it is plausible that HIV RNA degrades under these conditions, affecting the overall diagnostic accuracy of DBS. Analytical limitations were primarily due to the inability to perform repeat VL testing in cases of failed or invalid results, because only 3 spots were collected for M- and D-DBS and the Abbott platform requires 2 spots for processing.
In recognition that DBS should not replace plasma but rather be used to expand access where needed, we conclude that DBS VL testing using two 50-μL spots on the Abbott platform is a comparable and practical alternative to plasma for quantification of VL among ART-experienced adults and children in RLS and can support countries in building sustainable routine monitoring systems to achieve the 90% viral suppression global target. All 3 methods of DBS preparation evaluated in this study (venous, microcapillary, and direct drop) were acceptable, and a DBS cutoff of 1000 copies/mL was optimal because it minimized downward misclassification; this cutoff should be considered for VL monitoring using DBS. The CAP/CTM version 2.0 platform, despite having high sensitivity, is an inferior alternative for VL testing using DBS because of its significantly lower specificity, which would potentially result in a high number of unnecessary regimen switches. Program managers and policy makers may determine which method(s) of DBS preparation are most appropriate based on the local context and available resources to support supplies, management of biological waste, and human resource capacity. Programs should also consider new DBS assays, including the free viral elution protocol released after this study was implemented,30 to further inform planning for use of DBS for VL monitoring.
The authors thank the study participants who agreed to be in the study and the study site staff who supported study implementation. They also thank Dr. Tedd Ellerbrock of CDC for his invaluable input in study design and protocol review. The authors also thank the Kenya Medical Research Institute (KEMRI) and the Kenya Ministry of Health, whose participation made this study possible. This article is published with the permission of the Director of KEMRI.
VL-DBS Study Group: A. Adega, S.A., J. Akinyi, K.A., J. Atsyaya, M.B., F. Basiye, L.N.B., D. Ellenberger, A. Gichangi, M.J., L. Kingwara, D. Kwaro, C. Masson, I. Mohammed, I.M., C. Munguti, K. Muthusi, H. Muttai, S.M., J. Mwangi, G. Mwenda, K.N., L.N., N. Okeyo, J. Okonji, J.O., P. Omollo, F. Omondi, J. Orwa, E.R., J. Sabatier, M.E.S., M.U., J. Wagude, R. Warutere, C.Y., and C.Z.
1. UNAIDS. Global AIDS Update: UNAIDS Report. Geneva, Switzerland: UNAIDS; 2016.
2. Fiscus SA, Cheng B, Crowe SM, et al. HIV-1 viral load assays for resource-limited settings. PLoS Med. 2006;3:e417.
3. Roberts T, Bygrave H, Fajardo E, et al. Challenges and opportunities for the implementation of virological testing in resource-limited settings. J Int AIDS Soc. 2012;15:17324.
4. Stevens WS, Marshall TM. Challenges in implementing HIV load testing in South Africa. J Infect Dis. 2010;201(suppl 1):S78–S84.
5. Elbeik T, Chen YM, Soutchkov SV, et al. Global cost modeling analysis of HIV-1 and HCV viral load assays. Expert Rev Pharmacoecon Outcomes Res. 2003;3:383–407.
6. Elbeik T, Dalessandro R, Loftus RA, et al. HIV-1 and HCV viral load cost models for bDNA: 440 Molecular System versus real-time PCR AmpliPrep/TaqMan test. Expert Rev Mol Diagn. 2007;7:723–753.
7. Sawe FK, McIntyre JA. Monitoring HIV antiretroviral therapy in resource-limited settings: time to avoid costly outcomes. Clin Infect Dis. 2009;49:463–465.
8. Schneider K, Puthanakit T, Kerr S, et al. Economic evaluation of monitoring virologic responses to antiretroviral therapy in HIV-infected children in resource-limited settings. AIDS. 2011;25:1143–1151.
9. Havlir DV, Marschner IC, Hirsch MS, et al. Maintenance antiretroviral therapies in HIV-infected subjects with undetectable plasma HIV RNA after triple-drug therapy. N Engl J Med. 1998;339:1261–1268.
10. Keiser O, Tweya H, Boulle A, et al. Switching to second-line antiretroviral therapy in resource-limited settings: comparison of programmes with and without viral load monitoring. AIDS. 2009;23:1867–1874.
11. Sigaloff KC, Hamers RL, Wallis CL, et al. Unnecessary antiretroviral treatment switches and accumulation of HIV resistance mutations; two arguments for viral load monitoring in Africa. J Acquir Immune Defic Syndr. 2011;58:23–31.
12. Smith DM, Schooley RT. Running with scissors: using antiretroviral therapy without monitoring viral load. Clin Infect Dis. 2008;46:1598–1600.
13. Tucker JD, Bien CH, Easterbrook PJ, et al. Optimal strategies for monitoring response to antiretroviral therapy in HIV-infected adults, adolescents, children and pregnant women: a systematic review. AIDS. 2014;28(suppl 2):S151–S160.
14. Kantor R, Shafer RW, Follansbee S, et al. Evolution of resistance to drugs in HIV-1-infected patients failing antiretroviral therapy. AIDS. 2004;18:1503–1511.
15. Goetz MB, Holodniy M, Poulton JS, et al. Utilization and access to antiretroviral genotypic resistance testing and results within the US Department of Veterans Affairs. J Acquir Immune Defic Syndr. 2006;41:59–62.
16. Cozzi-Lepri A, Phillips AN, Ruiz L, et al. Evolution of drug resistance in HIV-infected patients remaining on a virologically failing combination antiretroviral therapy regimen. AIDS. 2007;21:721–732.
17. WHO. Consolidated Guidelines on the Use of Antiretroviral Drugs for Treating and Preventing HIV Infection. Recommendations for a Public Health Approach. 2nd ed. Geneva, Switzerland: World Health Organization; 2016.
18. UNAIDS. 90-90-90: An Ambitious Treatment Target to Help End the AIDS Epidemic. Geneva, Switzerland: Joint United Nations Programme on HIV/AIDS (UNAIDS); 2014.
19. Roche Molecular Diagnostics. Global Access Program FAQ Document. Pleasanton, CA: Roche Molecular Systems; 2015.
20. Fajardo E, Metcalf CA, Chaillet P, et al. Prospective evaluation of diagnostic accuracy of dried blood spots from finger prick samples for determination of HIV-1 load with the NucliSENS Easy-Q HIV-1 version 2.0 assay in Malawi. J Clin Microbiol. 2014;52:1343–1351.
21. Napierala Mavedzenge S, Davey C, Chirenje T, et al. Finger prick dried blood spots for HIV viral load measurement in field conditions in Zimbabwe. PLoS One. 2015;10:e0126878.
22. Rutstein SE, Kamwendo D, Lugali L, et al. Measures of viral load using Abbott RealTime HIV-1 Assay on venous and fingerstick dried blood spots from provider-collected specimens in Malawian District Hospitals. J Clin Virol. 2014;60:392–398.
23. Jennings CH, Granger B, Wager S, et al. Cross-platform analysis of HIV-1 RNA data generated by multicenter assay validation study with wide geographic representation. J Clin Microbiol. 2012;50:2737–2747.
24. Andreotti M, Pirillo M, Guidotti G, et al. Correlation between HIV-1 viral load quantification in plasma, dried blood spots, and dried plasma spots using the Roche COBAS Taqman assay. J Clin Virol. 2010;47:4–7.
25. Sawadogo S, Shiningavamwe A, Chang J, et al. Limited utility of dried blood and plasma spot based screening for antiretroviral treatment failure with COBAS Ampliprep/TaqMan HIV-1 v2.0. J Clin Microbiol. 2014;52:3878–3883.
26. Arredondo M, Garrido C, Parkin N, et al. Comparison of HIV-1 RNA measurements obtained by using plasma and dried blood spots in the automated abbott real-time viral load assay. J Clin Microbiol. 2012;50:569–572.
27. NASCOP. Guidelines on Use of Antiretroviral Drugs for Treating and Preventing HIV Infection: A Rapid Advice, 2014. Nairobi, Kenya: Ministry of Health, Government of Kenya.
28. Boillot F, Serrano L, Muwonga J, et al. Implementation and operational research: programmatic feasibility of dried blood spots for the virological follow-up of patients on antiretroviral treatment in Nord Kivu, Democratic republic of the Congo. J Acquir Immune Defic Syndr. 2016;71:e9–e15.
29. Rutstein SE, Hosseinipour MC, Kamwendo D, et al. Dried blood spots for viral load monitoring in Malawi: feasible and effective. PLoS One. 2015;10:e0124748.
30. Wu X, Crask M, Ramirez H, et al. A simple method to elute cell-free HIV from dried blood spots improves their usefulness for monitoring therapy. J Clin Virol. 2015;65:38–40.