Assessing the impact of interventions individually and in combination with other interventions is an important part of developing and implementing an optimal mix of activities for sexually transmitted disease (STD) prevention.1,2 Identifying outcomes of interest is one of the first steps; designing evaluations to determine the impact of the interventions on those outcomes follows.3 However, quantifying the marginal impact of interventions can be challenging, given the multiple confounding factors that can be present due to other programs, changes in the epidemiological context, and population-level trends in mixing or risk behavior that may be unrecognized or unquantified.1,4–6 We identify some of the issues that may complicate the estimation of intervention impact and that may alter the effect interventions have on the outcomes of interest to STD prevention programs.
Much of what we know about program and intervention effectiveness comes from research studies. Randomized controlled trials (RCTs) are widely considered to be the gold standard in determining intervention effect,7 although quality observational studies often yield results that are generally comparable.8 Randomized controlled trials by design focus on limited controlled differences in interventions to best assess the impact of the interventions on the outcome or outcomes of interest. In observational studies, the investigators have less control but attempt to collect data sufficient to analyze the impact of the interventions being analyzed.3 With both types of studies, key issues for programmatic translation include scalability and the ability to target the interventions to relevant populations.4,9
Modeling studies also contribute to the body of knowledge around program effectiveness, but modeling studies typically draw data for model inputs from RCTs, observational studies, and other literature. Well-designed modeling studies include sensitivity analyses; these often vary a small group of parameters individually or together, or vary larger sets of parameters randomly, such as with Monte Carlo simulation (or a modification, such as Latin hypercube sampling).10–12 Whether limited or comprehensive, sensitivity analyses generally vary parameters over their ranges. If some of the parameters influence others in ways that are not explicitly accounted for in the model, sensitivity analyses relying on random variation may lead to misleading conclusions about the range of outcomes likely to be associated with the program or intervention when implemented.
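The Latin hypercube approach mentioned above can be sketched briefly. The following is a minimal, stdlib-only illustration, not a production sampler; the parameter names and ranges are hypothetical stand-ins for inputs to a simple transmission model:

```python
import random

def latin_hypercube(n_samples, param_ranges, seed=0):
    """Latin hypercube sample: each parameter's range is cut into
    n_samples equal strata, and exactly one draw lands in each stratum."""
    rng = random.Random(seed)
    samples = [{} for _ in range(n_samples)]
    for name, (lo, hi) in param_ranges.items():
        width = (hi - lo) / n_samples
        # one uniform draw per stratum for this parameter
        draws = [lo + (i + rng.random()) * width for i in range(n_samples)]
        rng.shuffle(draws)  # decouple this parameter's strata from the others'
        for sample, draw in zip(samples, draws):
            sample[name] = draw
    return samples

# hypothetical parameter ranges for a simple transmission model
ranges = {"transmission_prob": (0.01, 0.10), "partner_change_rate": (1.0, 4.0)}
parameter_sets = latin_hypercube(100, ranges)
```

Compared with unrestricted Monte Carlo sampling, the stratification guarantees that each parameter's full range is covered even with a modest number of model runs.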
Although interventions are often studied in isolation, there may be crossover effects that alter their effectiveness. Interventions may have synergistic or antagonistic effects on each other. Interventions that are synergistic achieve a greater impact on outcomes when implemented together than the sum of the outcomes that would result from implementing either separately.13 Interventions that are antagonistic toward each other have less impact when combined than the sum of implementing them individually. These concepts can be defined mathematically, and interventions may meet the definition additively, multiplicatively, or both.13,14 However, synergistic and antagonistic effects can also be understood intuitively. For example, condoms are recommended to prevent STD and HIV transmission. Although not often a formal public health intervention, seroadaptive behaviors such as serosorting or seropositioning are sometimes used by men who have sex with men to reduce HIV transmission risk.15,16 However, condom use is lower among men using seroadaptive practices.17–19 Therefore, seroadaptation may have an antagonistic effect on the marginal impact of an intervention designed to increase condom usage. If seroadaptive behaviors are increasing over time, the marginal impact of a condom promotion intervention may diminish.20 The impact of seroadaptive behaviors may not merely overlap with that of condoms; it may actively reduce the effectiveness of interventions designed to increase condom usage.
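The multiplicative-scale comparison described in the literature can be made concrete with a small sketch. The relative-risk values below are hypothetical, chosen only to illustrate the comparison, not estimates from any cited study:

```python
def multiplicative_interaction(rr_a, rr_b, rr_combined):
    """Classify two interventions on the multiplicative scale: inputs are
    relative risks (the fraction of transmission remaining under each
    intervention); independence predicts the combined RR equals their product."""
    expected = rr_a * rr_b
    if rr_combined < expected:
        return "synergistic"
    if rr_combined > expected:
        return "antagonistic"
    return "independent"

# hypothetical figures: condom promotion alone cuts risk to 0.6, seroadaptive
# behavior alone to 0.7, but the observed combined relative risk is 0.5
print(multiplicative_interaction(0.6, 0.7, 0.5))  # antagonistic: 0.5 > 0.42
```

Here the combination still helps, but by less than independence would predict, matching the intuition that seroadaptive practices blunt the marginal impact of condom promotion.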
Synergies may be found with interventions that can achieve the same outcomes but that do not directly impact each other. For example, interventions designed to improve the HIV care continuum may be enhanced by promoting STD screening in persons who are HIV infected.21 Receipt of STD services provides another opportunity to maintain high levels of HIV care. Patients not in care may be motivated by an intervention to seek STD screening, at which time they can be reengaged in care. Sexually transmitted disease screening together with other HIV retention and linkage interventions may lead to more patients achieving viral suppression than the sum of what each intervention could achieve if implemented separately.
The marginal impact of interventions may also be different from that which is expected if multiple interventions achieve partially redundant effects but do not directly influence each other. For example, antiretroviral therapy in HIV-infected patients can reduce HIV transmission. Preexposure prophylaxis in high-risk HIV-susceptible populations can also reduce HIV transmission. The impact of both together may be less than the impact of the sum of each if implemented individually because the impact of the 2 interventions will, to some degree, be redundant.22 The marginal impact of antiretroviral therapy as an HIV prevention tool drops as the proportion of HIV-negative persons using preexposure prophylaxis increases.
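The redundancy argument can be expressed with a simple independence calculation: if two interventions each avert a fraction of transmissions and act independently, the combination averts 1 − (1 − a)(1 − b), which is always less than the naive sum a + b when both effects are nonzero. The effectiveness figures below are hypothetical, not estimates from the cited study:

```python
def combined_fraction_averted(eff_a, eff_b):
    """Fraction of transmissions averted by two partially redundant
    interventions acting independently: 1 - (1 - a)(1 - b)."""
    return 1 - (1 - eff_a) * (1 - eff_b)

# hypothetical effect sizes: ART averts 60% of transmissions, PrEP 40%
combined = combined_fraction_averted(0.6, 0.4)  # 0.76, not 0.6 + 0.4 = 1.0
marginal_art = combined - 0.4                   # ART now adds ~0.36, not 0.60
```

The drop from 0.60 to roughly 0.36 in the marginal contribution of antiretroviral therapy illustrates how its prevention value shrinks as preexposure prophylaxis coverage grows.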
It has long been recognized that marginal intervention impact can also be influenced by other factors, such as epidemic phase.5,23,24 Early in an epidemic, when disease is concentrated in high-risk groups, broad-based screening may have little impact; later, as infection spreads more widely, broadly focused interventions may have greater impact.
There can also be a number of contextual factors that impact the overall effectiveness of interventions. These can include broad socioeconomic factors, such as changes in labor markets or in poverty rates, population shifts, or the effect of nonhealth structural interventions.4,25 As an example of the latter, consider alcohol taxes and their impact on STD rates.26 These factors may impact the same outcomes as the interventions of interest yet be independent of them, or the factors may affect the marginal effectiveness of the interventions themselves. Because programs usually have access only to outcome data (in the form of case reports, clinic visits, or other similar measures) and may not even be able to fully ascertain all potentially relevant interventions impacting STD transmission, it can be difficult to determine how multiple factors are interacting.27
Patient-level factors can also influence the marginal impact of an intervention. An example is the impact of risk behavior counseling combined with STD screening. Brief and enhanced counseling interventions were found to be significantly more effective than didactic messaging at increasing consistent (100%) condom usage and reducing incident STDs at follow-up visits.28 However, the reduction in incident STDs was greatest among those diagnosed as having an STD at the study baseline.29 The marginal impact of the counseling intervention itself was affected by patients' disease status, which itself became known due to screening. Another study showed a difference in incident STDs among patients receiving rapid versus laboratory-based HIV testing coupled with counseling, suggesting that the marginal impact of the counseling intervention may have been different based on patient STD status at baseline.30 Additional patient-level contextual factors, such as the prevalence of risk behaviors or changes in mixing patterns, can impact intervention effectiveness; these can change over time and potentially modify the effectiveness of interventions in the same population over time.31
Consider a population where 2 interventions are contemplated: one aimed at reducing partner numbers and the other intended to increase condom usage. Suppose the partner-reduction intervention alone would reduce chlamydia incidence in a given location by 1000 cases per year and the condom intervention alone by 500 cases per year. If implementing both together would reduce incidence by 2000 cases per year, the interventions are synergistic: together they achieve a greater reduction than the 1500 cases per year expected from summing their separate effects. If the combined effect were instead a reduction of 1100 cases per year, they would be antagonistic. Different factors could account for synergistic or antagonistic effects. The interventions could be synergistic if the combination makes persons at risk for STDs more aware of their risk, prompting greater adoption of safer sex practices than either intervention would produce on its own. They could be antagonistic if, for example, both interventions targeted the same subpopulation of persons at risk for chlamydia infection, leading to a diminishing marginal impact as that subpopulation became saturated with prevention interventions while other at-risk persons remained relatively unexposed9; this dynamic has been modeled in examinations of the optimal intervention mix for HIV in some settings.32
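The arithmetic above maps directly onto the additive-scale definition of synergy; a minimal sketch using the article's own numbers:

```python
def additive_interaction(averted_a, averted_b, averted_combined):
    """Classify two interventions on the additive scale by comparing
    the combined cases averted against the sum of the solo effects."""
    expected = averted_a + averted_b
    if averted_combined > expected:
        return "synergistic"
    if averted_combined < expected:
        return "antagonistic"
    return "independent"

# the chlamydia example: 1000 and 500 cases averted individually
print(additive_interaction(1000, 500, 2000))  # synergistic: 2000 > 1500
print(additive_interaction(1000, 500, 1100))  # antagonistic: 1100 < 1500
```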
The precise mechanisms that lead to synergistic or antagonistic effects are not always known. The presence of adverse effects like antagonism does not necessarily mean that combinations of interventions should not be implemented together; although the combination may not achieve health outcomes equal to what might be expected looking at the independent effects of the interventions if implemented individually, the marginal impact of the antagonistic intervention or interventions may still be positive.
Given these factors, how can programs respond? Quality RCTs and, to some degree, observational studies remain the most reliable sources of evidence regarding the impact of interventions. They are the cornerstones on which the medical literature rests, particularly for clinical interventions.7,8 This overview of factors that may affect the marginal impact of interventions is not intended to dissuade programs from relying on published evidence or from trying new programs to reduce STD morbidity in their jurisdictions. However, programs should be aware that interventions may not achieve what they might expect based on published evidence. Randomized controlled trials are conducted in a manner, time, and context that may not apply to programs seeking to adopt the same intervention.27,33 Even if the manner of intervention delivery and the contextual and other factors initially match those that prevailed during the RCT or observational study, intervention effectiveness may change if circumstances change.5,23 If possible, researchers might enhance the usefulness of their RCT reports by including a discussion of contextual and programmatic considerations that may affect implementation of the interventions studied in the RCT.
Program evaluation remains an important tool in assessing the impact of programmatic activity.2,3 Evaluation methods for complex intervention mixes may be difficult to use and may require data that go well beyond case reporting.34 Assembling an inventory of interventions that might impact a given outcome can itself be challenging, and determining intervention impact may require data sharing that is logistically or legally difficult.35 However, comprehensive evaluations of program impact will provide the best indication of the marginal impact of interventions. An obvious approach to increase the marginal effectiveness of programs or combinations of programs is to seek to use synergistic interventions to maximize health impact.1,36 An awareness of how interventions can interact and of how the epidemiological context can influence interventions both existing and proposed can provide additional insight into the marginal impact of a program's STD activities.
1. Blanchard JF, Aral SO. Emergent properties and structural patterns in sexually transmitted infection and HIV research. Sex Transm Infect 2010; 86( 3 suppl): iii4–iii9.
2. Carter MW. Program evaluation for STD programs: In support of effective interventions. Sex Transm Dis 2015; 42. In press.
3. Milstein RL, Wetterhall SF. Framework for program evaluation in public health. MMWR Recomm Rep 1999; 48: 1–40.
4. Frieden TR. A framework for public health action: The health impact pyramid. Am J Public Health 2010; 100: 590–595.
5. Wasserheit JN, Aral SO. The dynamic topology of sexually transmitted disease epidemics: Implications for prevention strategies. J Infect Dis 1996; 174( 2 suppl): S201–S213.
6. Victora CG, Black RE, Boerma T, et al. Measuring impact in the Millennium Development Goal era and beyond: A new approach to large-scale effectiveness evaluations. Lancet 2011; 377: 85–95.
7. Atkins D, Best D, Briss PA, et al. Grading quality of evidence and strength of recommendations. BMJ 2004; 328: 1490–1497.
8. Benson K, Hartz AJ. A comparison of observational studies and randomized, controlled trials. N Engl J Med 2000; 342: 1878–1886.
9. Aral SO, Lipshutz JA, Douglas JM Jr. Introduction. In: Aral SO, Lipshutz JA, Douglas JM Jr, eds. Behavioral Interventions for Prevention and Control of Sexually Transmitted Diseases. New York: Springer, 2007: ix–xix.
10. Doubilet P, Begg CB, Weinstein MC, et al. Probabilistic sensitivity analysis using Monte Carlo simulation. A practical approach. Med Decis Making 1985; 5: 157–177.
11. Blower SM, Dowlatabadi H. Sensitivity and uncertainty analysis of complex models of disease transmission: An HIV model, as an example. Int Stat Rev 1994; 62: 229–243.
12. Iman RL, Helton JC. An investigation of uncertainty and sensitivity analysis techniques for computer models. Risk Anal 1988; 8: 71–90.
13. Dodd PJ, White PJ, Garnett GP. Notions of synergy for combinations of interventions against infectious diseases in heterogeneously mixing populations. Math Biosci 2010; 227: 94–104.
14. Kurth AE, Celum CL, Baeten JM, et al. Combination HIV prevention: Significance, challenges, and opportunities. Curr HIV/AIDS Rep 2011; 8: 62–72.
15. McFarland W, Chen Y-H, Nguyen B, et al. Behavior, intention, or chance? A longitudinal study of HIV seroadaptive behaviors, abstinence and condom use. AIDS Behav 2012; 16: 121–131.
17. Morris SR, Little SJ. MSM: Resurgent epidemics. Curr Opin HIV AIDS 2011; 6: 326–332.
18. Matser A, Heijman T, Geskus R, et al. Perceived HIV status is a key determinant of unprotected anal intercourse within partnerships of men who have sex with men in Amsterdam. AIDS Behav 2014; 18: 2442–2456.
19. Prestage G, Brown G, Down IA, et al. “It's hard to know what is a risky or not a risky decision”: gay men's beliefs about risk during sex. AIDS Behav 2013; 17: 1352–1361.
20. Snowden JM, Wei CY, McFarland W, et al. Prevalence, correlates and trends in seroadaptive behaviours among men who have sex with men from serial cross-sectional surveillance in San Francisco, 2004–2011. Sex Transm Infect 2014; 90: 498–504.
21. Hallett TB, Eaton JW. A side door into care cascade for HIV-infected patients? J Acquir Immune Defic Syndr 2013; 63( 2 suppl): S228–S232.
22. Abbas UL, Glaubius R, Mubayi A, et al. Antiretroviral therapy and pre-exposure prophylaxis: Combined impact on HIV transmission and drug resistance in South Africa. J Infect Dis 2013; 208: 224–234.
23. Koopman JS, Simon CP, Riolo CP. When to control endemic infections by focusing on high-risk groups. Epidemiology 2005; 16: 621–627.
24. Dantes HG, Koopman JS, Addy CL, et al. Dengue epidemics on the Pacific coast of Mexico. Int J Epidemiol 1988; 17: 178–186.
25. Aral SO, Padian NS, Holmes KK. Advances in multilevel approaches to understanding the epidemiology and prevention of sexually transmitted infections and HIV: An overview. J Infect Dis 2005; 191( 1 suppl): S1–S6.
26. Chesson H, Kassler WJ, Harrison P. Sex under the influence: The effect of alcohol policy on sexually transmitted disease rates in the United States. J Law Econ 2000; 43: 215–238.
27. Aral SO, Cates W Jr. Coverage, context and targeted prevention: Optimising our impact. Sex Transm Infect 2013; 89: 336–340.
28. Kamb ML, Fishbein M, Douglas JM Jr, et al. Efficacy of risk-reduction counseling to prevent human immunodeficiency virus and sexually transmitted diseases: A randomized controlled trial. JAMA 1998; 280( 13): 1161–1167.
29. Bolu O, Lindsey C, Kamb ML, et al. Is HIV/sexually transmitted disease prevention counseling effective among vulnerable populations? Sex Transm Dis 2004; 31: 469–474.
30. Metcalf CA, Douglas JM Jr, Malotte CK, et al. Relative efficacy of prevention counseling with rapid and standard HIV testing: A randomized, controlled trial (RESPECT-2). Sex Transm Dis 2005; 32: 130–138.
31. Adimora AA, Schoenbach VJ. Social determinants of sexual networks, partnership formation, and sexually transmitted infections. In: Aral SO, Fenton KA, Lipshutz JA, eds. The New Public Health and STD/HIV Prevention: Personal, Public and Health Systems Approaches. New York: Springer, 2013. Ch 2.
32. Beeharry G, Schwab N, Akhavan D, et al. Optimizing the Allocation of Resources Among HIV Prevention Interventions in Honduras. Report. 64. Washington: The World Bank, 2002.
33. Aral SO, Blanchard JF, Lipshutz J. STD/HIV prevention intervention: Efficacy, effectiveness and population impact. Sex Transm Infect 2008; 84( 2 suppl): ii1–ii13.
34. Gertler PJ, Martinez S, Premand P, et al. Impact Evaluation in Practice. Washington: The World Bank, 2011.
35. Gasner MR, Fuld J, Drobnik A, et al. Legal and policy barriers to sharing data between public health programs in New York City: A case study. Am J Public Health 2014; 104: 993–997.
36. Padian NS, McCoy SI, Balkus JE, et al. Weighing the gold in the gold standard: Challenges in HIV prevention research. AIDS 2010; 24: 621–635.