Klein, Pamela W. MSPH, PhD*†; Messer, Lynne C. PhD‡; Myers, Evan R. MD, MPH§; Weber, David J. MD, MPH*; Leone, Peter A. MD¶; Miller, William C. MD, PhD, MPH†
In the United States, approximately 20% of people infected with HIV are unaware of their HIV-infected status; disease transmission from these individuals accounts for 50% of new HIV infections.1,2 Effective HIV testing programs are essential to identify HIV-infected persons and enroll them in medical care, thereby slowing disease progression and reducing further HIV transmission.3,4 In 2006, the Centers for Disease Control and Prevention (CDC) recommended routine, opt-out HIV testing in clinical settings.5 From 2007 through 2010, testing programs performed 2.8 million CDC-funded HIV tests and identified more than 18,000 new HIV-infected cases.6 However, these cases represent only a small fraction of the approximately 150,000 new HIV infections acquired over the same period.7
Routine, opt-out HIV testing can be feasible to implement and acceptable to both patients and providers.8,9 Although the number of HIV tests performed increases with the introduction of an expanded HIV testing program, the impact on the identification of new HIV-infected cases has been inconclusive. Although some expanded HIV testing programs showed an increase in case detection, others showed a decrease or no change.9–16 These programs have been limited by small numbers and a focus on clinical settings with minimal preimplementation HIV testing.
We conducted a statewide, before-after analysis of a routine, opt-out expanded HIV testing program in all 102 North Carolina sexually transmitted disease (STD) clinics. North Carolina, like many southeastern states, bears a large burden of HIV infection and STDs.17 The program’s impact was measured by the number of HIV tests performed and new detection of HIV-infected cases. We aimed to determine the incremental impact of an expanded HIV testing program in a clinical setting with a high baseline level of HIV testing.
MATERIALS AND METHODS
Study Population and Setting
This study included all patients aged 18 to 64 years who were tested for HIV in North Carolina’s 102 county-level STD clinics from July 1, 2005, through June 30, 2011. Non–North Carolina residents and patients lacking an HIV test result were excluded from analysis (n = 1149 of 414,015 HIV tests).
Patients who agreed to HIV testing had blood samples drawn; the samples, along with a form containing demographic information, were processed at the North Carolina State Laboratory of Public Health (SLPH). At the SLPH, the samples were tested for HIV antibodies using a third-generation enzyme immunoassay; all reactive samples were confirmed via Western blot. Enzyme immunoassay–negative samples were pooled for acute HIV testing by polymerase chain reaction for viral RNA. Test results and demographic information were entered into the SLPH HIV testing database; results were provided to the patient at a follow-up STD clinic visit. If no prior HIV-positive record existed for a patient, the patient was entered into the state HIV surveillance database, the electronic HIV/AIDS reporting system (eHARS), as a new HIV-infected case. Patient-level data were collected by linking the SLPH and eHARS electronic surveillance databases by a unique HIV testing identifier.
Intervention: Expanded HIV Testing Program
The North Carolina Expanded HIV Testing Program was introduced in November 2007, focusing on routine, opt-out HIV testing in clinical settings, regardless of patient risk profile or HIV testing history. Because of the high-risk patient population, HIV testing was already common in STD clinics.18 In this preintervention period, opt-in, risk-based HIV testing was performed, with a focus on patients with sexual exposure to HIV, men who had sex with men, or no recent history of HIV testing. The opt-out, routine HIV testing intervention was disseminated and sustained through webinars, lectures, notices to health departments, contract addendums, and statewide conferences attended by STD clinic and health department employees.
The intervention was implemented on November 1, 2007. We assumed a lag period of 3 months from the start date of the intervention to full implementation. Therefore, in regression analyses, persons tested for HIV before November 1, 2007, were considered “unexposed” to the expanded HIV testing program; persons tested for HIV after February 1, 2008, were considered “exposed.” This lag period was varied from 0 to 6 months in sensitivity analyses.
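As a concrete sketch of this classification (illustrative code, not the authors' SAS programs), the exposure assignment under the assumed 3-month lag might look like:

```python
from datetime import date

# Sketch of the exposure classification described above. The handling of
# tests falling inside the lag window is an assumption; the lag itself
# was varied from 0 to 6 months in the authors' sensitivity analyses.
PROGRAM_START = date(2007, 11, 1)       # intervention start
FULL_IMPLEMENTATION = date(2008, 2, 1)  # assumed 3-month lag to full uptake

def classify_exposure(test_date,
                      start=PROGRAM_START,
                      full=FULL_IMPLEMENTATION):
    """Label one HIV test as 'unexposed', 'lag', or 'exposed'."""
    if test_date < start:
        return "unexposed"
    if test_date < full:
        return "lag"  # transition period between start and full uptake
    return "exposed"
```

For example, a test performed on December 15, 2007, falls in the lag window, whereas tests from March 2008 onward are classified as exposed.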
Outcomes
Two primary outcomes were evaluated: HIV testing and the new detection of HIV-infected cases. HIV testing was measured as the number of HIV tests performed and as the HIV testing rate per 100,000 persons, based on annual intercensal population estimates.19 Case detection was measured as both the number of new HIV-infected cases and HIV positivity per 1000 HIV tests. A new case of HIV infection was defined as a patient with a positive HIV test result in the same calendar month as the person’s diagnosis date in eHARS. This window period accounted for possible reporting delays and uncertainty regarding patients who lacked an exact date of HIV diagnosis; these patients were assigned to the 15th day of their diagnosis month. This approximation recoded 69 HIV-infected cases as newly diagnosed, when they otherwise would have been considered a previous diagnosis (3.3% of patients with a positive test result, 0.009% of the total study population).
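The case definition above can be sketched as follows; the function names are illustrative, not taken from the study's code:

```python
from datetime import date

def impute_diagnosis_date(year, month, day=None):
    """Patients lacking an exact diagnosis day are assigned the 15th
    of their diagnosis month, per the approximation described above."""
    return date(year, month, day if day is not None else 15)

def is_new_case(positive_test_date, diagnosis_date):
    """A positive test counts as a new HIV-infected case when it falls
    in the same calendar month as the eHARS diagnosis date."""
    return (positive_test_date.year == diagnosis_date.year
            and positive_test_date.month == diagnosis_date.month)
```

Under this rule, a positive test on March 4, 2008, matched to a diagnosis with a missing day in March 2008 (imputed to March 15) would be counted as a new case; the same test matched to a February 2008 diagnosis would not.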
Patient demographics at the time of HIV testing were abstracted from the SLPH database and included sex (male, female), race/ethnicity (non-Hispanic white, non-Hispanic black, Hispanic, other), and age. Sexually transmitted disease clinics were categorized by population density (<199, 200–399, 400–599, ≥600 persons per square mile), metropolitan (MeSA)/micropolitan (MiSA) statistical areas (MeSA, MiSA, neither), and proportion of the county living below the poverty line (<15%, 15%–19.9%, 20%–24.9%, ≥25%).20 A baseline HIV rate was calculated as the average number of reported HIV cases per 100,000 from 2005 through 2007 (categorized for analysis as 0–4.9, 5–9.9, 10–14.9, ≥15).21 A dichotomous variable identified the presence or absence of an in-house HIV clinic; STD clinics with an in-house HIV clinic were Durham, Mecklenburg, and Wake Counties.
Statistical Analysis
Descriptive analyses were used to examine trends in HIV testing and case detection before and after the introduction of the intervention. Multivariable regression analyses were conducted with 2 distinct approaches: interrupted time series analyses and multilevel modeling. Given the small proportion of study participants missing individual-level covariate information (age, sex, race/ethnicity; n = 9961; 2.4% of tests with a valid HIV test result), a complete case analysis was conducted. An individual HIV test, not patient, was the unit of analysis. The final analysis cohort included 402,774 unique HIV tests performed across 72 monthly time points.
Interrupted time series methods, specifically autoregressive integrated moving average models, were applied to serial monthly cross sections of HIV testing data. Because HIV testing in 1 month is dependent on testing in the prior month and influences testing in subsequent months, we used autoregressive integrated moving average models to account for underlying temporal autocorrelation in the monthly series and its residual errors. This method describes the trend (slope) of an outcome over time and how this trend changes with the introduction of an intervention. We identified parameters representing the (a) preintervention intercept, (b) preintervention slope, (c) overall postintervention slope, and (d) change in slope attributable to the intervention. Mean differences (MDs) and corresponding 95% confidence intervals (95% CIs) were calculated for all patients and stratified by patient- and clinic-level characteristics.
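To make the parameterization concrete, here is a minimal ordinary least squares sketch of a segmented model with the same intercept/slope/change-in-slope decomposition. It deliberately omits the autoregressive integrated moving average error structure the authors used for autocorrelation, so it is only an illustration; the noise-free example below uses values similar in magnitude to the study's monthly testing counts.

```python
def fit_segmented(y, t_int):
    """Fit y_t = b0 + b1*t + b2*max(0, t - t_int) by ordinary least squares.

    b0: preintervention intercept; b1: preintervention slope;
    b2: change in slope attributable to the intervention
    (the postintervention slope is b1 + b2)."""
    n, p = len(y), 3
    X = [[1.0, float(t), float(max(0, t - t_int))] for t in range(n)]
    # Normal equations (X'X) b = X'y, solved by Gauss-Jordan elimination.
    xtx = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(p)]
           for a in range(p)]
    xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(p)]
    aug = [row + [xty[a]] for a, row in enumerate(xtx)]
    for col in range(p):
        pivot = max(range(col, p), key=lambda r: abs(aug[r][col]))
        aug[col], aug[pivot] = aug[pivot], aug[col]
        for r in range(p):
            if r != col:
                f = aug[r][col] / aug[col][col]
                aug[r] = [v - f * w for v, w in zip(aug[r], aug[col])]
    return [aug[a][p] / aug[a][a] for a in range(p)]

# Noise-free illustration over 72 months: baseline of 3832 tests/month,
# preintervention slope +55 tests/month, and a -20 tests/month change
# in slope after month 28 (a hypothetical intervention point).
y = [3832 + 55 * t - 20 * max(0, t - 28) for t in range(72)]
b0, b1, b2 = fit_segmented(y, 28)
```

On noise-free data, the fit recovers the generating parameters exactly; with real surveillance data, the error model matters, which is why the authors used autoregressive integrated moving average models rather than plain ordinary least squares.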
To evaluate the overall association between the intervention and the rate of HIV testing per 100,000 population, we used Poisson regression to calculate rate ratios (RRs) and 95% CIs. The rate denominator was created from annual intercensal population estimates; STD clinic-specific denominator data were not available. To broadly adjust for time trends, models were also adjusted for calendar year.
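The unadjusted rate-ratio calculation can be sketched as follows, with a large-sample confidence interval computed on the log scale; the counts in the example are hypothetical, not the study's data.

```python
import math

def rate_ratio(events_exposed, pt_exposed, events_unexposed, pt_unexposed):
    """Unadjusted Poisson rate ratio with a large-sample 95% CI.

    pt_* are person-time (here, population) denominators; the standard
    error of log(RR) is sqrt(1/events_exposed + 1/events_unexposed)."""
    rr = (events_exposed / pt_exposed) / (events_unexposed / pt_unexposed)
    se = math.sqrt(1.0 / events_exposed + 1.0 / events_unexposed)
    lo = math.exp(math.log(rr) - 1.96 * se)
    hi = math.exp(math.log(rr) + 1.96 * se)
    return rr, lo, hi

# Hypothetical example: 1200 tests per 100,000 person-years postintervention
# vs. 1000 tests per 100,000 person-years preintervention.
rr, lo, hi = rate_ratio(1200, 100000, 1000, 100000)
```

In the study's models, the denominators came from annual intercensal population estimates, and an indicator for calendar year was added to broadly adjust for time trends.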
Fixed-slope random-intercept multilevel regression models were used to evaluate the intervention’s impact on HIV case detection, while accounting for clustering by STD clinic. Intercepts were varied to accommodate differential HIV risks by county. In sensitivity analyses, we compared the random-intercept models to models without county-level clustering. Because HIV positivity is a rare outcome, we used logistic regression to calculate odds ratios (ORs) and corresponding 95% CIs.
With an externally determined time point as the demarcation between the exposed and unexposed periods, measured covariates were not associated with the “exposure” (intervention) and could not be confounders. To address potential differences in the covariate distributions over time, the multilevel model was adjusted for patient- and clinic-level characteristics (Table 1). An indicator for calendar year was added to broadly account for underlying time trends.
All analyses were conducted using SAS version 9.2 (SAS Institute, Cary, NC).22 This study was approved by the institutional review board of the University of North Carolina at Chapel Hill.
RESULTS
Preintervention, 128,029 HIV tests were performed, of which 426 (0.33%) were new HIV-infected cases. In the postintervention period, 274,745 HIV tests were performed, detecting 816 (0.30%) new HIV-infected cases (Table 1).
More than half of the tested patients were female, and this proportion increased from the preintervention to the postintervention period (51.8% vs. 54.9%). The proportion of non-Hispanic black patients increased from 53.2% to 58.4%, whereas the proportions of non-Hispanic white, Hispanic, and other race/ethnicity patients decreased. No changes in the age distribution of patients receiving an HIV test or in clinic-level characteristics were observed between the preintervention and postintervention periods.
Number of HIV Tests Performed
In July 2005, the baseline number of HIV tests performed per month was 3832. Before the intervention, the number of HIV tests performed per month increased at a rate of 55 tests per month (95% CI, 41–72), or an increase of 0.81 tests per 100,000 persons per month (Fig. 1A, B). Postintervention, the monthly increase in the number of tests slowed to 34 tests per month (95% CI, 26–42), or an increase of 0.46 tests per 100,000 persons per month. Compared with the monthly rate of HIV testing predicted in the absence of the intervention, the monthly rate of HIV testing attributable to the intervention decreased by 20 tests per month (95% CI, −37 to −5) or −0.35 tests per 100,000 persons per month.
This overall trend in HIV testing was driven by specific demographic subpopulations (Table 2). Decreases in the rate of HIV testing per 100,000 population per month attributable to the intervention were observed among men (MD, −0.45; 95% CI, −0.70 to −0.21), non-Hispanic blacks (MD, −1.57; 95% CI, −2.34 to −0.80), Hispanics (MD, −1.55; 95% CI, −2.19 to −0.92), and patients in the youngest age categories (18–24 years; MD, −1.34 [95% CI, −2.07 to −0.61]; 25–34 years: MD, −0.54 [95% CI, −1.03 to −0.05]). Decreases in the rate of HIV testing per month attributable to the intervention were also pronounced in clinics located in counties of high population density (MD, −0.63; 95% CI, −1.0 to −0.25) and high baseline HIV case rates (MD, −0.74; 95% CI, −1.1 to −0.40).
Unadjusted Poisson models identified an increase in the rate of HIV testing associated with the intervention (RR, 1.33; 95% CI, 1.32–1.34). However, after adjustment for calendar year, the association was inverted (RR, 0.88; 95% CI, 0.85–0.91) and in agreement with the interrupted time series results.
New HIV-Infected Cases
The baseline number of new HIV-infected cases detected was 13.82 per month (95% CI, 10.82–16.82), or 3.59 cases per 1000 HIV tests per month (95% CI, 3.05–4.12; Fig. 1C, D). Little temporal trend in HIV positivity per 1000 tests per month was observed in either the preintervention or postintervention periods (preintervention MD, −0.02 [95% CI, −0.05 to 0.02]; postintervention MD, 0.00 [95% CI, −0.02 to 0.02]).
Despite the lack of a significant trend in HIV positivity, the expanded HIV testing program did slightly mitigate the negative slope observed before the intervention (MD, 0.01; 95% CI, −0.02 to 0.05; Table 3). This mitigation was driven by increases in monthly case detection rates attributable to the intervention among women (MD, 0.03; 95% CI, 0.01–0.07) and non-Hispanic black patients (MD, 0.05; 95% CI, 0.00–0.10). Slight increases in the rate of case detection per month attributable to the intervention were also observed in clinics without an in-house HIV clinic (MD, 0.03; 95% CI, −0.02 to 0.07) and in counties with moderate levels of poverty and high baseline rates of HIV.
Based on the unadjusted multilevel regression model, the introduction of the expanded HIV testing program was associated with an 11% reduction in the odds of HIV positivity (OR, 0.89; 95% CI, 0.79–1.00; Table 4). The inclusion of patient-level covariates slightly attenuated this association but did not alter precision (OR, 0.93; 95% CI, 0.82–1.05). Adjustment of the multilevel model for calendar year attenuated the observed association completely to the null (OR, 1.02; 95% CI, 0.69–1.52) and adversely affected precision.
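As a rough check (a sketch, not the authors' multilevel model), the unadjusted odds ratio can be approximated from the aggregate counts reported above; pooling across clinics ignores the random intercepts, so agreement is only approximate.

```python
def odds_ratio(pos_exposed, tests_exposed, pos_unexposed, tests_unexposed):
    """Crude odds ratio from aggregate counts (ignores clinic clustering)."""
    odds_exposed = pos_exposed / (tests_exposed - pos_exposed)
    odds_unexposed = pos_unexposed / (tests_unexposed - pos_unexposed)
    return odds_exposed / odds_unexposed

# 816 of 274,745 tests positive postintervention vs. 426 of 128,029
# preintervention, from the counts reported in the Results.
crude_or = odds_ratio(816, 274745, 426, 128029)  # ≈ 0.89
```

The crude value of about 0.89 matches the unadjusted multilevel estimate, suggesting the random intercepts shifted the point estimate little in this case.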
DISCUSSION
Despite the CDC’s recommendation for routine, opt-out HIV testing in clinical settings, the impact of expanded HIV testing programs is unclear.5,9–16 We evaluated HIV testing and case detection of a routine, opt-out HIV testing program in North Carolina STD clinics using a before-after intervention analysis. Because of a consistent increase in HIV testing before the intervention, the incremental impact of the expanded HIV testing program was minimal. This preintervention increase in testing can likely be attributed to an emphasis on integrated HIV and STD prevention by the North Carolina Division of Public Health since the early 2000s. However, changes to the North Carolina Administrative Code were necessary to allow for implementation of routine, opt-out HIV testing in clinical settings.
In the postintervention phase, the monthly rate of HIV testing increased, but at a slower rate than before the intervention. This attenuation was driven primarily by a decreased rate of HIV testing attributable to the intervention among patients regularly targeted for testing (men, non-Hispanic blacks, Hispanics, younger patients) but an increased rate of testing attributable to the intervention in populations not traditionally considered at high risk for HIV (women, non-Hispanic whites).7,23
Although the change in HIV testing rates attributable to the intervention among traditionally high-risk patients decreased, the overall rate of HIV testing per month continued to increase. HIV testing is an outcome bounded by the size and capacity of the STD clinic and cannot increase infinitely. By expanding HIV testing services, we believe that the intervention will eventually allow for a higher maximum level of HIV testing to be reached than would have been observed without the intervention.
Among Hispanics, the overall postintervention rate of HIV testing decreased. This result is concerning: in the Hispanic community, HIV prevalence is high and many barriers complicate HIV prevention.24,25 However, underlying changes in the migrant Hispanic population, driven by reduced employment during the 2008 economic downturn that coincided with the postintervention period, could explain this result. If the overall population of migrant Latino workers decreased, these workers would be removed disproportionately from the numerator of STD clinic clients but not from the intercensal population-based denominator, which could artificially decrease HIV testing rates.
Because the greatest increase in HIV testing was among persons at lower risk for HIV acquisition, incremental increases in case detection were minimal. This minimal impact indicates that providers were already successfully identifying HIV-infected persons without the intervention. Increases in case detection rates attributable to the intervention were observed in populations with increased HIV testing (women) and populations that reflect HIV epidemic trends in North Carolina (non-Hispanic blacks).
The small magnitude of the increase in HIV testing is consistent with evaluations of HIV testing programs in other settings with high baseline levels of HIV testing. In a Denver STD clinic, HIV testing increased 1.2% because 79% of patients tested for syphilis were already being tested for HIV before the intervention.10 Expectations of an HIV testing intervention’s magnitude should be tempered by the limits of the setting, which can be dictated by preexisting HIV testing and case detection levels. In contrast to the STD clinic setting, an opt-out HIV testing program in a North Carolina emergency department with low preintervention levels of HIV testing during this same period resulted in a 173% increase in HIV testing.26
The impact of expanded HIV testing programs on case detection is inconclusive. Interventions have led to both increases and decreases in case detection.9–16 By examining the trajectory of case detection for more than 2 years before the intervention, we were able to detect a declining preintervention trend. This decline was followed by a steady rate of case detection during the postintervention phase, driven by increased diagnoses in certain population groups.
Nearly all extant evaluations of HIV testing programs reduced the preintervention level of HIV testing and case detection to a cross-sectional measure. A program in San Francisco was evaluated with a dynamic preintervention comparison but was implemented in an urban setting with a low level of preintervention HIV testing and lacked generalizability.16 A static measure of baseline HIV testing would not adequately capture preexisting trends in HIV testing or case detection. In our evaluation, using a cross-sectional or aggregate measure of HIV testing without adjusting for calendar year overestimated the impact of an HIV testing intervention. An aggregate measure of case detection underestimated the impact of the intervention, even showing a spurious negative association.
Interrupted time series and multilevel regression analyses answer complementary questions. Interrupted time series analysis addresses the change in the rate of an outcome over time and is an ecologic method; the unit of analysis is the cross-sectional calendar month. Although we urge caution in the overinterpretation of ecologic analyses, the agreement between the interrupted time series and multilevel regression models including calendar year strengthens our confidence in the interrupted time series results. We could not directly account for unmeasured covariates such as changing perceptions of HIV, HIV-related stigma, and shifting disease dynamics. However, our study’s “quasi-experimental” design should account for many unmeasured covariates.
The use of routinely collected public health surveillance data allowed us to evaluate this intervention throughout North Carolina. This rich data source led to a larger study population than would have been feasible in a standard research environment or if analyses were restricted to a single clinical facility. However, surveillance data are not collected for research purposes and the completeness and accuracy of records and data elements cannot be verified.
Despite the disproportionately high burden of HIV in the southeastern United States, this study is the first to evaluate an expanded HIV testing program in the region using a longitudinal preintervention comparison.27–30 HIV prevention interventions in the South face unique challenges due to the high rates of comorbid conditions, socioeconomic disparities, and a stark contrast between urban and rural areas, which contribute to HIV-related stigma and difficulty accessing HIV medical care.31 County-level STD clinics play a crucial role in HIV prevention; within the North Carolina SLPH, STD clinics account for 36% of HIV tests and nearly 50% of new HIV diagnoses.25
In North Carolina STD clinics, the introduction of a routine, opt-out expanded HIV testing program did not significantly alter the trajectory of HIV testing or case detection. Given the bounded nature of these outcomes, these results are not surprising. We believe that, due to the increased population eligible for HIV testing, this intervention allowed the HIV testing saturation point to settle at a higher level than would have been observed without the intervention. We also identified slight increases in case detection that mitigated a preintervention decline in identification of new HIV-infected cases. Because HIV testing of the highest-risk populations was already very successful in the STD clinics, the incremental impact of expanding testing to lower-prevalence populations was marginal.
REFERENCES
1. Hall HI, Holtgrave DR, Maulsby C. HIV transmission rates from persons living with HIV who are aware and unaware of their infection. AIDS 2012; 26: 893–896.
2. Gardner EM, McLees MP, Steiner JF, et al. The spectrum of engagement in HIV care and its relevance to test-and-treat strategies for prevention of HIV infection. Clin Infect Dis 2011; 52: 793–800.
3. Castilla J, Del Romero J, Hernando V, et al. Effectiveness of highly active antiretroviral therapy in reducing heterosexual transmission of HIV. J Acquir Immune Defic Syndr 2005; 40: 96–101.
4. Hogg RS, Heath KV, Yip B, et al. Improved survival among HIV-infected individuals following initiation of antiretroviral therapy. JAMA 1998; 279: 450–454.
5. Branson BM, Handsfield HH, Lampe MA, et al. Revised recommendations for HIV testing of adults, adolescents, and pregnant women in health-care settings. MMWR Recomm Rep 2006; 55: 1–17; quiz CE11-14.
6. Results of the Expanded HIV Testing Initiative—25 jurisdictions, United States, 2007–2010. MMWR Morb Mortal Wkly Rep 2011; 60: 805–810.
7. Prejean J, Song R, Hernandez A, et al. Estimated HIV incidence in the United States, 2006–2009. PLoS One 2011; 6: e17502.
8. White DA, Scribner AN, Martin ME, et al. A comparison of patient satisfaction with emergency department opt-in and opt-out rapid HIV screening. AIDS Res Treat 2012; 2012: 904916.
9. Haukoos JS, Hopkins E, Conroy AA, et al. Routine opt-out rapid HIV screening and detection of HIV infection in emergency department patients. JAMA 2010; 304: 284–292.
10. Brooks L, Rietmeijer CA, McEwen D, et al. Normalizing HIV testing in a busy urban sexually transmitted infections clinic. Sex Transm Dis 2009; 36: 127–128.
11. Das-Douglas M, Zetola NM, Klausner JD, et al. Written informed consent and HIV testing rates: The San Francisco experience. Am J Public Health 2008; 98: 1544–1545.
12. Goetz MB, Hoang T, Bowman C, et al. A system-wide intervention to improve HIV testing in the Veterans Health Administration. J Gen Intern Med 2008; 23: 1200–1207.
13. Nayak SU, Welch ML, Kan VL. Greater HIV testing after Veterans Health Administration policy change: The experience from a VA Medical Center in a high HIV prevalence area. J Acquir Immune Defic Syndr 2012; 60: 165–168.
14. West-Ojo T, Samala R, Griffin A, et al. Expanded HIV testing and trend in diagnoses of HIV infection—District of Columbia, 2004–2008. MMWR Morb Mortal Wkly Rep 2010; 59: 737–741.
15. White DA, Scribner AN, Vahidnia F, et al. HIV screening in an urban emergency department: comparison of screening using an opt-in versus an opt-out approach. Ann Emerg Med 2011; 58 (1 suppl 1): S89–S95.
16. Zetola NM, Grijalva CG, Gertler S, et al. Simplifying consent for HIV testing is associated with an increase in HIV testing and case detection in highest risk groups, San Francisco January 2003–June 2007. PLoS One 2008; 3: e2591.
17. Prejean J, Tang T, Hall HI. HIV diagnoses and prevalence in the southern region of the United States, 2007–2010. J Community Health 2012; 38: 414–426.
18. Mayer KH. Sexually transmitted diseases in men who have sex with men. Clin Infect Dis 2011; 53 (suppl 3): S79–S83.
19. Intercensal Population Estimates. U.S. Census Bureau.
21. North Carolina 2007 HIV/STD Surveillance Report. Raleigh, NC: North Carolina Division of Public Health, HIV/STD Prevention & Care Branch; 2008.
22. SAS [computer program]. Version 9.2. Cary, NC: SAS Institute, Inc.; 2008.
23. Lansky A, Brooks JT, DiNenno E, et al. Epidemiology of HIV in the United States. J Acquir Immune Defic Syndr 2010; 55 (suppl 2): S64–S68.
24. Painter TM. Connecting the dots: When the risks of HIV/STD infection appear high but the burden of infection is not known—The case of male Latino migrants in the southern United States. AIDS Behav 2008; 12: 213–226.
25. North Carolina Epidemiologic Profile for HIV/STD Prevention & Care Planning. State of North Carolina Department of Health & Human Services, Division of Public Health, Epidemiology Section, Communicable Disease Branch; 2011.
26. Hoots BE, Klein PW, Martin IBK, et al. Implementation of a collaborative HIV testing model between an emergency department and infectious disease clinic. J Acquir Immune Defic Syndr. 2014 March 26 (Epub ahead of print).
27. Sattin RW, Wilde JA, Freeman AE, et al. Rapid HIV testing in a southeastern emergency department serving a semiurban-semirural adolescent and adult population. Ann Emerg Med 2011; 58 (1 suppl 1): S60–S64.
28. Weis KE, Liese AD, Hussey J, et al. A routine HIV screening program in a South Carolina community health center in an area of low HIV prevalence. AIDS Patient Care STDS 2009; 23: 251–258.
29. Copeland B, Shah B, Wheatley M, et al. Diagnosing HIV in men who have sex with men: An emergency department’s experience. AIDS Patient Care STDS 2012; 26: 202–207.
30. MacGowan R, Margolis A, Richardson-Moore A, et al. Voluntary rapid human immunodeficiency virus (HIV) testing in jails. Sex Transm Dis 2009; 36: S9–S13.
31. Reif S, Geonnotti KL, Whetten K. HIV infection and AIDS in the deep south. Am J Public Health 2006; 96: 970–973.