The HIV care continuum has gained significant traction as a useful framework among both public health practitioners and academic researchers. The continuum estimates the proportion of HIV-infected persons who are HIV diagnosed, linked to and retained in care, initiated on antiretroviral therapy, and virally suppressed.1,2 Public health practitioners are increasingly being called upon (and required)3 to estimate and improve the HIV care continuum in their jurisdiction (e.g., state, dependent area, and metropolitan statistical area). Public health estimates of the HIV care continuum typically rely on CD4 counts and viral loads reported to HIV surveillance.4 In this issue of Sexually Transmitted Diseases, Toren et al.5 use surveillance data to show that the time between HIV diagnosis and viral suppression for persons diagnosed as having HIV in Seattle & King County decreased dramatically from 2007 to 2013. We welcome the article by Toren et al.5 as a valuable and timely contribution to the literature on the effects of rapidly changing treatment paradigms on improvements in the HIV care continuum.
Being population-based, public health surveillance data offer a unique opportunity to obtain unbiased estimates of the HIV care continuum. Estimation of the HIV care continuum is inherently limited in clinical cohorts. Namely, clinical cohorts represent a nonrandom subset of all HIV-diagnosed persons who (1) ever entered care and (2) remained in care at the clinic where the cohort is based. Because surveillance data are population-based, the recent work by Toren et al.5 has the advantage of being able to include all persons diagnosed as having HIV in Seattle & King County, including persons who never enter care and never achieve viral suppression. The inclusion of all diagnosed persons avoids three biases inherent in clinic-based analyses: (1) bias in absolute time to viral suppression, if persons who never enter care (and thus never suppress) are excluded from the risk set; (2) bias in characteristics associated with prompt viral suppression, if HIV-infected persons who enter care differ from those who do not; and (3) bias in trends in time to viral suppression over calendar time, if the characteristics of HIV-infected persons who entered care varied over time.
Most clinics do not have the resources for extensive patient tracing if patients fail to return for follow-up. Patients may leave a clinic because they move out of the area, switch clinical providers within the same area, or fall out of clinical care; this information is not generally recorded. In one clinical cohort, loss to clinic was substantial (5-year risk, 46%), yet mortality among persons lost to clinic was similar to mortality among persons retained in care, suggesting that some of those who were lost to clinic likely entered care elsewhere and that loss to clinic may overestimate loss to care.6 Because public health surveillance data are not clinic based, they (hypothetically) capture laboratory measures for HIV-infected patients regardless of which provider ordered the tests, and therefore follow patients who change clinics within the same area. (Rules for laboratory reporting and levels of compliance with reporting rules vary across jurisdictions.) However, public health surveillance data still cannot typically distinguish between persons who move out of jurisdiction and persons who are lost to care. Seattle & King County is one of very few public health agencies that have gathered additional information to distinguish out-migration from loss to care so that emigrants can be censored.7
Indeed, Seattle & King County is also one of a minority of public health jurisdictions that have been collecting laboratory surveillance data consistently enough to be able to estimate changes in time to viral suppression. Name-based reporting of HIV diagnoses was not implemented in all 50 states until 2008,8 a full year after follow-up in the analysis by Toren et al.5 began. From 2011 to 2013, of 50 states and 6 dependent areas, only 19, 18, and 28 jurisdictions (respectively) had laboratory data that were estimated to be at least 95% complete.9–11 These observations serve to illustrate both how quickly HIV surveillance rules have changed to incorporate laboratory reporting, and yet still how few jurisdictions have data that are complete enough to provide reliable information on the HIV care continuum.
One of the aspects of the recent work from Toren et al. that we find most notable and exciting is that it grew out of a successful and relatively rare partnership between a public health agency (Seattle & King County HIV/STD Program) and an academic epidemiology program (University of Washington). Other public health agencies have reported difficulty analyzing and disseminating data due to lack of resources.8 Yet, if public health surveillance data are not available to inform interventions, a major purported goal of surveillance is undermined.12 Furthermore, when the important work of public health agencies is not presented in the peer-reviewed literature, its impact on future public health practice and academic research may be stunted. By partnering with academic researchers, public health practitioners stand to benefit from the additional expertise, time, and resources that can be brought to the analysis of their surveillance data. Academics have a responsibility to address on-the-ground public health questions13 and stand to learn from addressing the methodological challenges posed in analyses of public health surveillance data.
Public health–academic epidemiology partnerships should serve to incorporate modern epidemiological methods into analyses of surveillance data. Specifically, although the analysis by Toren et al.5 represents the state of the art for public health surveillance, analyses of surveillance data could be improved with (relatively standard) epidemiological methods for the analysis of cohort data. First, longitudinal studies of HIV surveillance data should control for potentially informative censoring.14 In the study by Toren et al.,5 some HIV-diagnosed persons moved out of Seattle & King County. The authors were able to censor persons when they moved out of jurisdiction. However, if emigration was related to time to viral suppression, the reported estimates may be biased.15,16 Second, longitudinal studies of outcomes that require observation should adjust for the observation plan.17 As the authors acknowledge, their estimates of time to viral suppression are likely overestimates because viral suppression cannot be recognized until viral load is measured. However, if viral load monitoring was more frequent in recent years, time to viral suppression would seem to decrease, even if actual time to viral suppression did not change. Although viral load monitoring patterns are unlikely to explain the dramatic results presented in this analysis, future improvements in time to viral suppression will likely be more subtle, and the influence of observation patterns may play a bigger role. In addition, comparisons of time to viral suppression between groups may be biased if access to care (and laboratory testing) differs between the groups. Third, public health surveillance data almost always suffer from missing data, and analyses of public health surveillance data should use missing data methods.18,19 Toren et al.
made a decision to exclude persons missing baseline CD4 cell count from multivariable analyses and showed that this decision did not impact results in a sensitivity analysis that included everyone. However, CD4 cell count is almost certainly not missing completely at random, and thus, we expect the complete-case analysis to be biased in expectation.19 In this instance, the sensitivity analysis showed little difference because the proportion of missing data was small, and thus, the amount of bias was small; nevertheless, the least biased estimate in this article was likely the one presented in the sensitivity analysis. Finally, results from Seattle & King County may not be generalizable to the rest of the country, and furthermore, generalizability may not be the most important metric on which to judge these results. Although the authors find their results comparable to those from New York City20 and San Francisco,21 it remains unknown whether similar trends would be seen in the rest of the country, particularly in the Southeast, where most new HIV cases are being diagnosed.22 Examining the HIV continuum according to structural, demographic, and geographical differences is important for identifying groups and areas in need of further intervention. Thus, rather than seeking generalizable estimates, we should seek out heterogeneity. Because most successful public health and academic partnerships exist in areas with strong public health infrastructure and proportionally more resources, we lack good information on the HIV care continuum in some of the most disproportionately impacted populations.
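The informative-censoring concern raised above can be made concrete with a small simulation. In the sketch below, every quantity (the "mover" covariate, the hazards, the 30% mover prevalence) is hypothetical and not taken from Toren et al.; persons prone to emigrate also suppress more slowly, so simply dropping persons censored before the horizon overstates the proportion suppressed, while inverse probability of censoring weights14 recover the truth.

```python
import math
import random

random.seed(2023)

# Hypothetical cohort (not Toren et al.'s data): "movers" emigrate sooner
# AND suppress more slowly, so censoring by emigration is informative.
TAU = 2.0                           # horizon (years) for P(suppressed by TAU)
RATE_T = {True: 0.4, False: 1.0}    # suppression hazard, by mover status
RATE_C = {True: 1.0, False: 0.1}    # emigration (censoring) hazard

people = []
for _ in range(200_000):
    mover = random.random() < 0.3
    t = random.expovariate(RATE_T[mover])   # latent time to suppression
    c = random.expovariate(RATE_C[mover])   # latent time to emigration
    people.append((mover, t, c))

# Naive estimate: keep only persons observed through min(T, TAU); movers
# are dropped more often, biasing the estimate toward fast suppressors.
num = den = 0
for mover, t, c in people:
    if c >= min(t, TAU):
        den += 1
        num += t <= TAU
naive = num / den

# IPCW estimate: weight the same persons by 1 / P(C > min(T, TAU) | mover).
# Here the censoring distribution is known; in practice it is estimated.
wnum = wden = 0.0
for mover, t, c in people:
    if c >= min(t, TAU):
        w = math.exp(RATE_C[mover] * min(t, TAU))   # 1 / S_C(min(t, TAU))
        wden += w
        wnum += w * (t <= TAU)
ipcw = wnum / wden

# True P(suppressed by TAU) implied by the exponential model
true_p = 0.3 * (1 - math.exp(-RATE_T[True] * TAU)) \
       + 0.7 * (1 - math.exp(-RATE_T[False] * TAU))
print(f"true={true_p:.3f}  naive={naive:.3f}  ipcw={ipcw:.3f}")
```

With these hypothetical rates, the naive estimate overstates the proportion suppressed by roughly 10 percentage points, whereas the weighted estimate tracks the truth.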
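The complete-case point can likewise be illustrated with a toy example. All numbers below are hypothetical, and a single stratified mean imputation stands in for proper multiple imputation,18 which would additionally propagate imputation uncertainty. When missingness of baseline CD4 depends on an observed characteristic (here, a hypothetical "late presenter" indicator) that also predicts CD4, the complete-case mean is biased, while imputation under the missing-at-random assumption recovers the population mean.

```python
import random

random.seed(7)

# Hypothetical data: "late presenters" have lower baseline CD4 and are
# more often missing a baseline CD4 measurement (MAR given `late`).
records = []
for _ in range(200_000):
    late = random.random() < 0.4
    cd4 = random.gauss(350 if late else 550, 100)
    observed = random.random() >= (0.5 if late else 0.1)
    records.append((late, cd4 if observed else None))

TRUE_MEAN = 0.4 * 350 + 0.6 * 550   # 470 cells/mm^3 by construction

# Complete-case mean: late presenters (low CD4) are underrepresented
# among complete cases, so the mean is biased upward.
complete = [cd4 for _, cd4 in records if cd4 is not None]
cc_mean = sum(complete) / len(complete)

# Stratified mean imputation: fill each missing CD4 with the observed
# mean of its stratum, which is valid under MAR given `late`.
strata = {True: [], False: []}
for late, cd4 in records:
    if cd4 is not None:
        strata[late].append(cd4)
stratum_mean = {k: sum(v) / len(v) for k, v in strata.items()}
imputed = [cd4 if cd4 is not None else stratum_mean[late]
           for late, cd4 in records]
imp_mean = sum(imputed) / len(imputed)

print(f"true={TRUE_MEAN:.0f}  complete-case={cc_mean:.0f}  imputed={imp_mean:.0f}")
```

As in the article, when the proportion missing is small the two estimates differ little; the sketch exaggerates the missingness to make the direction of the complete-case bias visible.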
In conclusion, we applaud the work of the Seattle & King County HIV/STD Program and researchers from the University of Washington. Public health surveillance data can provide population-based estimates of progress toward the National HIV/AIDS Strategy (NHAS) 2020 goals.23 Identification of heterogeneity in progress toward these goals by region and by demographic and clinical characteristics will help identify groups in particular need of public health intervention. In estimating the distribution and movement of HIV-diagnosed persons across stages of the HIV care continuum and in monitoring improvements in the continuum, however, epidemiological methods for more traditional cohort analyses should be applied to guard against biases from loss to follow-up, differential observation of cohort members, and missing data. We hope this partnership and others like it inspire public health and academic partnerships in other geographic regions, along with the publication of high-quality HIV care continuum estimates based on rigorous epidemiological analyses.
1. Greenberg AE, Hader SL, Masur H, et al. Fighting HIV/AIDS in Washington, D.C. Health Aff (Millwood) 2009; 28: 1677–1687.
2. Gardner EM, McLees MP, Steiner JF, et al. The spectrum of engagement in HIV care and its relevance to test-and-treat strategies for prevention of HIV infection. Clin Infect Dis 2011; 52: 793–800.
3. National HIV/AIDS Strategy for the United States. Office of National AIDS Policy; 2010.
4. Lesko CR, Sampson LA, Miller WC, et al. Measuring the HIV care continuum using public health surveillance data in the United States. J Acquir Immune Defic Syndr 2015; 70: 489–494.
5. Toren KG, Buskin SE, Dombrowski JC, et al. Time from HIV diagnosis to viral load suppression: 2007–2013. Sex Transm Dis 2015.
6. Edwards JK, Cole SR, Westreich D, et al. Loss to clinic and five-year mortality among HIV-infected antiretroviral therapy initiators. PLoS One 2014; 9: e102305.
7. Buskin SE, Kent JB, Dombrowski JC, et al. Migration distorts surveillance estimates of engagement in care: Results of public health investigations of persons who appear to be out of HIV care. Sex Transm Dis 2014; 41: 35–40.
8. National Assessment of HIV/AIDS Surveillance Capacity. Council of State and Territorial Epidemiologists; 2009.
9. Centers for Disease Control and Prevention. Monitoring selected national HIV prevention and care objectives by using HIV surveillance data—United States and 6 dependent areas—2011. HIV Surveill Suppl Rep 2013; 18.
10. Centers for Disease Control and Prevention. Monitoring selected national HIV prevention and care objectives by using HIV surveillance data—United States and 6 dependent areas—2012. HIV Surveill Suppl Rep 2014; 19.
11. Centers for Disease Control and Prevention. Monitoring selected national HIV prevention and care objectives by using HIV surveillance data—United States and 6 dependent areas—2013. HIV Surveill Suppl Rep 2015; 20.
12. Thacker SB, Berkelman RL. Public health surveillance in the United States. Epidemiol Rev 1988; 10: 164–190.
13. Galea S. An argument for a consequentialist epidemiology. Am J Epidemiol 2013; 178: 1185–1191.
14. Robins JM, Finkelstein DM. Correcting for noncompliance and dependent censoring in an AIDS Clinical Trial with inverse probability of censoring weighted (IPCW) log-rank tests. Biometrics 2000; 56: 779–788.
15. Hernán MA, Hernández-Díaz S, Robins JM. A structural approach to selection bias. Epidemiology 2004; 15: 615–625.
16. Westreich D. Berkson's bias, selection bias, and missing data. Epidemiology 2012; 23: 159–164.
17. Hernán MA, McAdams M, McGrath N, et al. Observation plans in longitudinal studies with time-varying treatments. Stat Methods Med Res 2009; 18: 27–52.
18. Rubin DB. Multiple imputation after 18+ years. J Am Stat Assoc 1996; 91: 473–489.
19. Greenland S, Finkle WD. A critical look at methods for handling missing covariates in epidemiologic regression analyses. Am J Epidemiol 1995; 142: 1255–1264.
20. Torian LV, Xia Q. Achievement and maintenance of viral suppression in persons newly diagnosed with HIV, New York City, 2006–2009: Using population surveillance data to measure the treatment part of “test and treat”. J Acquir Immune Defic Syndr 2013; 63: 379–386.
21. Das M. Reducing Community Viral Load to Achieve HIV Prevention. Available at: http://www.iapac.org/AdherenceConference/presentations/ADH7_Invited_Das.pdf. Accessed October 20, 2015.
22. Diagnoses of HIV Infection in the United States and Dependent Areas, 2013. HIV Surveill Rep 2015; 25. Available at: http://www.cdc.gov/hiv/library/reports/surveillance/2013/surveillance_Report_vol_25.html. Accessed October 20, 2015.
23. National HIV/AIDS Strategy for the United States: Updated to 2020. Office of National AIDS Policy; 2015.