Epidemiology & Social
CDC HIV prevention indicators: monitoring and evaluating HIV prevention in the USA
Rugg, Deborah L.; Heitgerd, Janet L.(a); Cotton, David A.(b); Broyles, Stephanie(c); Freeman, Anne(d); Lopez-Gomez, Ana Maria(e); Cotten-Oldenburg, Niki U.(f); Page-Shafer, Kimberly(g)*; and the HIV Prevention Indicators Field Collaborative
From the Global AIDS Activity/LIFE Initiative, Centers for Disease Control and Prevention, Atlanta, GA; (a) the Agency for Toxic Substances and Disease Registry; (b) Macro International, Atlanta, GA; (c) the Louisiana Office of Public Health, New Orleans, LA; (d) the University of Texas Southwestern Medical Center, Dallas, TX; (e) the Boston University School of Public Health, Boston, MA; (f) the Minnesota Department of Health, Minneapolis, MN; (g) the San Francisco Health Department, San Francisco, CA, USA. *See Appendix.
Received: 8 May 1999; revised: 7 April 2000; accepted: 26 April 2000.
Sponsorship: Supported by grants (62/CCU-614570, 114569, 513167, 613197, 913184) to each of the field sites and a contract to Macro International, Inc. (00-96-0598) from the Program Evaluation Research Branch (PERB), Division of HIV/AIDS Prevention Intervention Research and Support, National Center for HIV, STD, and TB Prevention, CDC.
Requests for reprints to: D. L. Rugg, Global AIDS Activity/LIFE Initiative, Mailstop E-07, National Center for HIV, STD and TB Prevention, Centers for Disease Control and Prevention, 1600 Clifton Rd., NE, Atlanta, GA 30333, USA.
Objective: This study selected and field-tested indicators to track changes in HIV prevention effectiveness in the USA.
Methods: During 1996–1999, the Centers for Disease Control and Prevention held two 2-day expert consultations with more than 80 national, state and local experts. A consensus-driven, evidence-based approach was used to select 70 indicators, which had to be derived from existing data, available in more than 25 states, and meaningful to state health officials in monitoring HIV. A literature review was performed for each indicator to determine general relevance, validity, and reliability. Two field tests in five US sites determined accessibility, feasibility, and usefulness.
Results: The final 37 core indicators represent four categories: biological, behavioral, services, and socio-political. Specific indicators reflect the epidemic and associated risk factors for men who have sex with men, injection drug users, heterosexuals at high risk, and childbearing women.
Conclusions: Despite limitations, the indicators sparked the regular, proactive integration and review of monitoring data, facilitating a more effective use of data in HIV prevention community planning.
In the USA, as elsewhere, the need to monitor and evaluate the effects of prevention and control programs and policies is fundamental [1–3]. Fuelled by the 1995 US Government Performance Results Act, which requires federal agencies to develop performance measures to monitor program effectiveness, the focus on using indicators to monitor the results of prevention programs has been renewed. The goal of evaluating the effectiveness, as well as the cost-effectiveness, of HIV prevention programs [4–9] presents new challenges because of rapid changes in the nature of HIV prevention and care in the USA. Monitoring and evaluation experiences from drug use prevention, health care reform, and mental health services [10–12], as well as from global HIV prevention efforts [7,13], have helped inform the development of HIV prevention indicators for the USA. Monitoring indicators, when they are well conceived, conducted as part of a comprehensive evaluation plan, triangulated with other indicators, and interpreted in context, are a quick reference guide for program planners and policy-makers to use in assessing overall prevention effects in their jurisdiction.
Because national evaluations of the implementation of the HIV Prevention Community Planning Initiative are under way or have been completed [8,15,16], state and local officials expressed to the Centers for Disease Control and Prevention (CDC) a need to focus on an HIV prevention outcome evaluation strategy. Specifically they called for indicators that would help them better monitor the effects of overall HIV prevention efforts in their jurisdictions. Thus, in 1996, CDC initiated a project to develop a limited set of priority measures that could easily be used to describe how well a state or local jurisdiction was doing in terms of HIV prevention.
The development of a comprehensive HIV prevention evaluation framework was the first step. This framework, and a complete description of the evaluation activities designed to help describe the local context, is published elsewhere. The next step was to tailor the framework to this project and select, field-test, and disseminate core indicators to help monitor the vital signs of HIV prevention in health jurisdictions across the country.
Surveillance, evaluation, and monitoring
Because the areas of surveillance, monitoring, and evaluation are related but serve different purposes, it is important to distinguish their functions [8,13,17,18]. Basically, surveillance is the tracking of a disease or health problem and its associated risk factors in order to characterize an epidemic. It is the first stage of a public health response to an epidemic or health problem and continues for the duration of the problem.
Program evaluation is the assessment of the implementation of a specific program and/or the determination of its effectiveness. Program effects can be determined immediately, in the short or intermediate term (we call these effects outcomes), and over the long term (we call these effects impact). Short term effects are typically monitored in behavioral determinants and behaviors. When intervention effects are of sufficient duration, intensity, and scope, the desired long term effect, a reduction in the level of HIV transmission, can be predicted and meaningfully monitored.
We define monitoring as the ongoing proactive review and triangulation of a small set of core measures selected from surveillance data and other sources. These measures provide a general indication of how well a program is doing in terms of achieving its goals and objectives, and serve as a quick reference guide to program progress and success. This use is consistent with the Institute of Medicine's report on assessing performance measures, the Prevention Indicators of the World Health Organization's Global Program on AIDS [13,20], and the recent efforts of UNAIDS and others to revise and define worldwide HIV prevention, care, and support indicators. Such measures are collected across types of data categories, over time, at the local, state, or national level to determine overall status and improve prevention efforts. This paper provides the rationale for the development of the HIV prevention indicators, describes the methods used to select indicators, presents the results, and discusses the implications of a national field test in five sites across the USA.
From October 1996 through September 1999, a consensus-driven, evidence-based phased approach was used to select and field-test a core set of indicators. These indicators were drawn from the best available data sources and represented behavioral and sociopolitical factors associated with HIV transmission, as well as the level of reported cases of HIV/AIDS and other sexually transmitted diseases (STD). Indicator development proceeded with the understanding that the indicators should: be relevant to monitoring the epidemic locally; be useful to state and local health departments in determining overall prevention effects; suggest areas for further in-depth evaluation; be relatively easily derived from ongoing local, state, or national data sources; and be based on widely attainable data (i.e. available in more than 25 states). Ultimately, monitoring indicators should be implemented nationwide and results reported to CDC so that a national picture might be derived.
Development of core indicators
In December 1996 and July 1997, CDC held 2-day meetings to define, prioritize, and further specify the core indicators that were selected to measure the priority risk factors associated with HIV transmission in the following subepidemics in the USA: men who have sex with men (MSM); heterosexuals at high risk; injecting drug users (IDU); and childbearing women who can transmit HIV perinatally. More than 80 experts in HIV prevention who attended the two sessions represented a broad range of scientists; federal, state, and city program managers; public policymakers; and community members. Participants were briefed about the relevant prevention framework, potential data sources, other similar efforts to develop indicators (e.g. by the World Health Organization) [7,13], and the project's purpose and progress. Depending on expertise and interest, the participants were divided into four working groups, one for each of the four subepidemics. Experts in these groups first specified the biological, behavioral, and sociopolitical factors they considered to be important aspects of a comprehensive approach to HIV prevention for their specific target population. Next they considered which of these factors were measurable by currently available data. They then framed associated indicators in general terms for each component. This list was then reduced using the following criteria: (i) indicator is relevant to HIV prevention; (ii) data source is widely available (i.e. in more than 25 states); (iii) data source is easily accessible; (iv) indicator data can be collected at the state or local level; (v) indicator has validity; (vi) indicator has reliability; (vii) indicator is useful to state or local health officials.
Of the more than 200 general indicators proposed, reviewed, and discussed at the two consensus meetings, approximately 70 received high ratings from the working groups. A large number of indicators were excluded because data sources were not available in more than 25 states (e.g. indicators measured by special studies in restricted locations or by data available only at the national level). Other core indicators were excluded because the working groups decided not to include measures of programmatic effort (e.g. resource allocation or other inputs), measures that are better suited to process evaluation. The excluded indicators were collected and categorized as supplemental indicators if available in at least one, but fewer than 25 states, or as potential future indicators if they were considered important but there was no data source for them.
A comprehensive literature review for each proposed indicator was then conducted, the proposed data sources examined, and two rounds of field testing carried out to further determine the strength of each indicator in relation to each of the seven criteria. The literature review was used to further determine the indicator's scientific relevance, defined as empirical evidence of the relationship between the indicator and HIV prevention or transmission, validity, and reliability. The data sources were examined to assess data availability, usefulness, and interpretability. Several indicators required the calculation of numerators and denominators. The specifications and protocols from the data sources used were examined to determine the relevance, validity and reliability for the purposes of HIV prevention monitoring indicators.
Between May 1997 and July 1999, two field tests of the indicators were conducted to determine accessibility and feasibility, and to further determine validity, reliability, interpretability, and usefulness. Five health jurisdictions (Louisiana, Massachusetts, Minnesota, San Francisco, and Texas), representing different levels of HIV morbidity and types of subepidemics, participated in the field tests. A retrospective 5 year trend analysis was used to interpret the results and assess the implications for long term prevention planning and evaluation. Also, when possible, indicator breakdowns included sex (male, female), race/ethnicity (black, non-Hispanic; white, non-Hispanic; American Indian/Alaska Native; Asian/Pacific Islander; and Hispanic), and age (15–24, 25–34, 35–44, and 45+ years). These breakdowns were especially important in monitoring changes in local communities and among different risk groups.
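The retrospective trend analysis described above can be sketched in miniature. The example below is illustrative only; all counts and population figures are hypothetical (the field tests used actual surveillance data). It computes a crude rate per 100,000 for one indicator over 5 years, broken down by sex, the kind of demographic breakdown the sites examined.

```python
# Illustrative sketch of a retrospective 5-year indicator trend.
# All numbers below are hypothetical, not from the field tests.

def rate_per_100k(cases, population):
    """Crude rate per 100,000 population."""
    return 100_000 * cases / population

# Hypothetical reported case counts for one jurisdiction, 1993-1997
years = [1993, 1994, 1995, 1996, 1997]
cases = {
    "male": [410, 395, 370, 350, 330],
    "female": [120, 125, 130, 128, 131],
}
population = {"male": 1_500_000, "female": 1_550_000}  # assumed denominators

for sex, counts in cases.items():
    rates = [rate_per_100k(c, population[sex]) for c in counts]
    # Simple trend summary: absolute change in the rate over the period
    change = rates[-1] - rates[0]
    direction = "down" if change < 0 else "up"
    print(f"{sex}: {rates[0]:.1f} -> {rates[-1]:.1f} per 100,000 "
          f"({direction} {abs(change):.1f})")
```

In practice a jurisdiction would triangulate such a trend with the other indicators in the set rather than interpret it in isolation.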
Description of core HIV prevention indicators
The net result was a set of 37 core HIV prevention indicators: nine for MSM, eight for IDU, 14 for high risk heterosexuals, and six for childbearing women (Table 1). The number of indicators for each group was directly associated with the availability of data sources for each group. The indicators were categorized into four categories: biological (e.g. AIDS incidence, HIV detection, and STD incidence and prevalence), behavioral (e.g. the extent of high-risk sexual behaviors, condom use, and number of sex partners), service (e.g. syringe distribution and prenatal care), and sociopolitical (e.g. policies and laws).
There are many more biological core indicators than behavioral core indicators for IDU, MSM, and childbearing women. The lack of widely available standardized data sources for sexual and drug use behaviors, interpretable at the state or local level, reduced the number of indicators for the MSM and IDU.
Four core indicators were categorized as program services. Because these indicators measured high-priority intervention strategies (e.g. provision of zidovudine to HIV infected pregnant women) that have direct effects in the prevention of HIV transmission, they were retained throughout consensus development [22–25]. As important components of a comprehensive HIV prevention approach, these strategies were not being monitored sufficiently elsewhere and were included in the core indicators set.
Finally, the indicators for high-risk heterosexuals account for 40% of the total number of indicators. The availability of data from behavioral surveys of the general population contributed to this predominance.
Since 1981, CDC has been tracking the AIDS epidemic through nationwide HIV/AIDS surveillance systems. In 1985, when HIV antibody testing became available, states, at their discretion, began collecting and reporting to CDC data on confirmed HIV infection (33 states currently collect and report data on HIV infection). States also collect a limited amount of HIV-related behavioral information through the Youth Risk Behavioral Surveillance System and the Behavioral Risk Factor Surveillance System. States may request data on HIV seroprevalence and STD from Job Corps data maintained by CDC (see Web address in Table 2). Vital statistics, census data, and access to state and local laws and policies are available in all areas. These data sources are relatively stable data systems from which we were able to assess trends. One exception is the Survey of Child Bearing Women (SCBW), which was available nationally from 1988 through 1995, and now continues in only a few states. It is included as a data source because experts in the working group strongly argued that the SCBW is the ideal measure of HIV prevalence among childbearing women even though it continues in only a few states. In jurisdictions where the SCBW is not available, officials should seek local sources of comparable data.
Core HIV prevention indicators by risk group
Although each indicator for a subepidemic conveys important information about the epidemic, using the indicators as an integrated set of measures presents a more complete picture. Viewing the indicators as a set provides health officials with a foundation from which to make decisions about future prevention and evaluation activities that are needed in their jurisdiction.
There are nine biological indicators representing indirect measures of HIV transmission (indicators 1–5) and sexual risk behaviors (indicators 6–9) (Table 3). Selecting disease transmission indicators for all risk groups is difficult because of bias due to self-selection for testing and incomplete case reporting. For example, the HIV detection indicator measures HIV testing and detection, not true HIV incidence. If case reporting is incomplete, then the HIV prevalence indicators are not true measures of prevalence. However, both are a subset of data routinely collected and used by states in constructing their epidemiologic profiles for the purposes of community planning. The working groups felt these indicators were the best estimates available for monitoring purposes for all risk groups, with the following caveats noted: AIDS incidence is actually an incidence rate; AIDS prevalence is actually a prevalence ratio; HIV incidence is difficult to determine and is better characterized as HIV detection, since the numerator is the number of cases diagnosed and reported in a year, whereas incidence refers to the actual number of infections acquired in a year; HIV prevalence is actually the diagnosed HIV prevalence ratio (which may only represent two-thirds of actual prevalence); chlamydia, gonorrhea, and primary and secondary syphilis indicators are also dependent on diagnosed and reported rates, and technically are not true incidence.
Indicators 5 and 9 (Table 3) measure the prevalence of infection among economically disadvantaged out-of-school youth, who may be at increased risk of HIV infection compared with the general population. The other five indicators in this risk group measure high-risk behaviors among the general heterosexual adult and sexually active youth populations.
Eight indicators were selected for IDU: four biological (IDU 1–4), one service (IDU 5), and three sociopolitical (IDU 6–8). The syringe distribution indicator is a proxy measure for the availability of sterile syringes and needles in a local jurisdiction (availability has been associated with HIV prevention among IDU) (Table 4); it is a reflection of the political and social climate surrounding access to and the availability of sterile needles and syringes in a given jurisdiction.
The nine indicators for MSM include five biological, one service, and three sociopolitical measures and are summarized in Table 5. Although using the total MSM population as the denominator for MSM prevalence and incidence indicators is a departure from the calculations traditionally used (i.e. in which the total male population is used as a denominator), the consultants and the participants in the field sites strongly endorsed the use of a subepidemic-specific denominator to take advantage of research efforts to estimate the MSM population [28–30]. Because of lack of information or capacity to calculate new denominators, some states may continue to use the traditional calculations. Incidence and prevalence rates that use an MSM denominator, however, are more likely to meet the needs of the MSM community by providing a better estimate of infection among MSM. (For a more in-depth discussion on calculating denominators for prevention indicators among MSM, see http://www.cdc.gov/hiv/eval.htm)
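The denominator choice discussed above can be shown with a toy calculation. The figures below are entirely hypothetical, and the MSM population share is an assumed stand-in for the kind of estimate produced by the studies cited [28–30]; the point is only that the same diagnosed case count yields very different prevalence rates under the two denominators.

```python
# Illustrative sketch: traditional vs subepidemic-specific denominators.
# All figures are hypothetical.

def prevalence_per_100k(cases, denominator):
    """Diagnosed prevalence ratio expressed per 100,000 population."""
    return 100_000 * cases / denominator

total_male_pop = 2_000_000   # hypothetical adult male population of a jurisdiction
msm_share = 0.05             # assumed estimate of the MSM proportion of that population
msm_pop = int(total_male_pop * msm_share)

diagnosed_msm_cases = 6_000  # hypothetical diagnosed HIV cases among MSM

# Traditional calculation: total male population as the denominator
traditional = prevalence_per_100k(diagnosed_msm_cases, total_male_pop)
# Subepidemic-specific calculation: estimated MSM population as the denominator
subepidemic = prevalence_per_100k(diagnosed_msm_cases, msm_pop)

print(f"Per total male population:    {traditional:.0f} per 100,000")
print(f"Per estimated MSM population: {subepidemic:.0f} per 100,000")
```

The subepidemic-specific rate is larger by exactly the inverse of the assumed MSM share, which is why it better conveys the burden of infection within the MSM community itself.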
The rectal gonorrhea incidence indicator is a sensitive indicator of unprotected anal sex among MSM. Unfortunately, little is known about the incidence of rectal gonorrhea because of historically inconsistent reporting of anatomical site of infection.
The one service indicator for MSM measures the availability of condoms in prisons. For example, it is estimated that 11% of the inmates in the 73 prisons in New York State are infected with HIV, a proportion that has led to increased advocacy for condom availability (i.e. in addition to condoms for conjugal visits). Finally, societal homophobia is measured by the presence or absence of antidiscrimination laws. The indicator for same-sex domestic partnership benefits refers to whether state laws allow such benefits.
Six indicators were specified for childbearing women (Table 6). For two indicators, experts suggested one of two measures, depending on data availability, because the SCBW is no longer conducted in most states. For each indicator, the preferred measure is listed first. These indicators are consistent with the call for increased perinatal HIV prevention efforts in the Institute of Medicine report.
Several lessons were learned from the development and field testing of the indicators that may be of interest to others who plan to use similar indicators.
The dynamic nature of indicator monitoring: using trend data in an evolving epidemic
The use of indicators is predicated on the assumption that one can accurately monitor and interpret changes in the indicators over time. To do this, data sources, measures, and data collection methods need to remain reasonably constant. Several changes were experienced during the project period. First, after each step in development, the indicators' numerators and denominators were refined and specified as new information dictated. Here the final version of the indicators derived from this process is given. However, as surveillance systems are updated and other data become available, the need for adjustments will continue. These changes must be balanced with the goal of monitoring trends over time to inform state and local program planning.
Second, the changes that took place in the core indicator data sources during the short pilot period were not anticipated. For example, because of funding reductions, fewer states (from 25 to 10) used the Behavioral Risk Factor Survey sexual behavior module, which contained the behavioral data for high risk heterosexuals. However, these data came closest to meeting the core definition and provided the behavioral risk information needed; omitting the indicators from the core would have resulted in an even more serious gap in behavioral data. The decision to retain them was also made because funding streams may change again.
Third, the interpretability of AIDS surveillance data began to decline as the effects of anti-retroviral therapy began to be reflected in national surveillance data. The resulting reduction in the number of AIDS cases is profoundly affecting CDC's surveillance activities. Efforts are being made to enhance AIDS surveillance by the year 2003 so that it can be linked to services and service needs and be more useful for monitoring treatment access and failure. Additionally, as AIDS indicators become less significant for prevention, HIV and behavioral surveillance become more important as core indicators, especially as an increasing number of states implement HIV reporting. Currently, CDC, the Council of State and Territorial Epidemiologists, and the National Alliance of State and Territorial AIDS Directors, among others, recommend that all states implement HIV reporting.
Determine the usefulness of the core indicators as a set
As expected, the use of the core indicators as a set differed by site, because the subepidemics differed by site. Currently, the indicators are viewed as a set, with CDC's provision that states need not report indicators that are not relevant to local subepidemics or subepidemics for which data sources are not available. Omitting indicators should be done cautiously, however, because subepidemics can emerge, and the prevention indicators can prompt scanning of the surveillance systems from which they are derived for early warning signs. Additionally, the usefulness of the indicators in describing the overall status of prevention efforts in a jurisdiction is enhanced by triangulating the indicators.
Recognize and respond to the limitations of the indicators and gaps in data sources
Several constraints were faced in developing the core indicators. First, the indicators had to be derived from existing data sources. Second, the data sources had to be available in most states. Third, there were, and still are, major gaps in the behavioral data sources in the USA and thus in the behavioral indicators. The most troublesome gap is the lack of standardized information about condom use. Because the correct and consistent use of condoms is a fundamental prevention strategy, the lack of data on this behavior severely limits the ability to monitor the effects of prevention programs.
States and localities in the USA still do not have standardized, routine HIV-related behavioral surveillance. There are many one-time local behavioral surveys that can sometimes be linked, but such surveys have limited usefulness for monitoring trends in risk behaviors. Several of the national population-based surveys, although useful to CDC in creating a national picture, cannot produce state or local data and thus are of limited use in local decision-making. A few indicators can be drawn from the Youth Risk Behavior Surveillance System and Behavioral Risk Factor Surveillance System; however, the data do not necessarily reflect at-risk populations. Filling behavioral data gaps would improve the usefulness of the indicators and is likely to be most feasible and effective if conducted by state and local health departments in collaboration with local behavioral experts and community-based organizations.
In 1997, at the National Institutes of Health National Consensus Conference on HIV Prevention, CDC was asked to conduct national HIV-related behavioral surveillance. Even before that time, CDC had been exploring the complex methodological and practical issues related to behavioral surveillance and had been developing standardized measures, including questions about condom use, sexual behavior, and drug use. These measures are now being field tested. (Information about these activities is available at http://www.cdc.gov/nchstp/od/core_workgroup)
Supplemental indicators aid in interpreting the core indicators
From the beginning, it was clear that sites would need locally relevant information to help interpret the core indicators. It was also clear that such data need not be available everywhere so a second category – supplemental indicators – was developed. As site-specific indicators, they proved useful in interpreting the core indicators, measuring dimensions of the local epidemic, and informing local HIV prevention planning. Examples of supplemental indicators are reported condom use among MSM or injecting drug use with sterile syringes among IDU [31,38,39]. CDC looks to state and local health departments for additional networking and sharing of useful supplemental indicators.
Monitor indicators as a part of a comprehensive monitoring and evaluation strategy
The usefulness of the indicators is increased if implemented in concert with other programmatic indicators, surveillance, and evaluation activities [17,36]. At several sites, indicators were used to identify needs and direct further efforts to evaluate specific prevention program activities. Staff at one site are working to show how programmatic indicators, intervention outcome evaluations, and the HIV prevention indicators can be linked and used in HIV prevention planning.
At CDC these indicators are helping to inform new activities, for example: (i) the development of a rapid assessment manual for gauging social and behavioral changes in the community, a manual that will help state and local health departments fill the gap in behavioral data; (ii) a new study that will examine the HIV prevention decision-making process and identify data-based approaches to meet decision-making needs; (iii) efforts to develop new community-level indicators; and (iv) CDC's new behavioral surveillance activities. Additionally, the new CDC Suggested Guidelines for Developing an Epidemiologic Profile will recommend inclusion of the HIV prevention indicators as a quick reference guide in the epidemic profiles prepared by each state annually in their HIV prevention community planning process.
Our review of HIV/AIDS indicators developed by other performance measurement activities, including the Healthy People Year 2000 and 2010 objectives, the measures developed by CDC to respond to the Government Performance Results Act (GPRA), and the HIV/AIDS measures developed by the Health Resources and Services Administration to respond to GPRA, found similarities to the HIV prevention indicators described here. However, because the HIV prevention indicators were developed more completely and systematically, the indicators are better specified and more comprehensive as a set (a matrix comparing the indicators is available at http://www.cdc.gov/hiv/eval.htm).
In conclusion, one of the major successes of this project, from the perspective of AIDS directors, is that ‘the use of the CDC HIV prevention indicators has sparked the regular and proactive review of our monitoring data, facilitating more comprehensive and effective use of data in our HIV prevention community planning process. It also has made integrating biological, behavioral, programmatic, and social data easier and readily available when needed’ (Jill DeBoer, AIDS Director, State of Minnesota, oral communication, 1999). As the indicators are implemented in more places and the experiences of the pilot sites repeated, a significant step in determining the effects of our collective efforts in HIV prevention will have been made.
The authors thank S. Dooley and J. Buehler, PERB Chief during the conceptual stages of the study, for their important contributions.
1. Mayne J, Zapico-Goni E (eds). Monitoring Performance in the Public Sector.
New Brunswick, NJ: Transaction; 1997.
2. Rieper O, Toulemonde J (eds). Politics and Practices of Intergovernmental Evaluation.
New Brunswick, NJ: Transaction; 1997.
3. Vedung E. Public Policy and Program Evaluation.
New Brunswick, NJ: Transaction; 1997.
4. Valdiserri RO. Preventing AIDS: The Design of Effective Programs.
New Brunswick, NJ: Rutgers University Press; 1989.
5. Coyle S, Boruch R, Turner C. Evaluating AIDS Prevention Programs.
Washington, DC: National Academy Press; 1991.
6. Holtgrave DR, Harrison J, Gerber RA, Aultman TV, Scarlett M. Methodological issues in evaluating HIV prevention community planning.
Public Health Rep 1996, 111 (suppl 1): 108 –114.
7. Mertens T, Carael M. Evaluation of HIV/STD prevention, care and support: an update on WHO's approach.
AIDS Educ Prev 1997, 9: 133 –145.
8. Rugg D, Buehler J, Renaud M, Gilliam A, et al. Evaluating HIV prevention: a framework for national, state, and local levels.
Am J Eval 1999, 20: 35 –56.
9. Holtgrave DR, Qualls NL, Graham JD. Economic evaluation of HIV prevention programs.
Annu Rev Public Health 1996, 17: 467 –488.
10. Brandeis University. Institute for Health Policy and Boston University, Join Together Program. A Community Substance Abuse Indicators Handbook.
Boston, MA: University Press; 1997.
11. Perrin E, Koshel J (eds). IOM Report: Assessment of Performance Measures for Public Health, Substance Abuse, and Mental Health.
Washington, DC: National Academy Press; 1997.
12. Gruenewald PJ, Treno AJ, Taff G, Klitzner M. Measuring Community Indicators.
Thousand Oaks, CA: Sage; 1997.
13. Mertens T, Carael M, Sato P, Cleland J, Ward H, Smith GD. Prevention indicators for evaluating the progress of national AIDS programmes.
AIDS 1994, 8: 1359 –1369.
14. Innes JE. Knowledge and Public Policy: The Search for Meaningful Indicators.
New Brunswick, NJ: Transaction; 1990.
15. Centers for Disease Control and Prevention. Supplemental Guidance on HIV Prevention Community Planning for Non-competing Continuation of Cooperative Agreements for HIV Prevention Projects.
Atlanta, GA: CDC; 1993. (Available from the National Public Information Network, 1 800 458 5231).
16. Holtgrave DR, Valdiserri RO. Year one of HIV prevention community planning: a national perspective on accomplishments, challenges, and future directions.
J Public Health Manage Practice 1996, 2: 1 –9.
17. United Nations Joint Programme on AIDS. Monitoring HIV Prevention, AIDS Care and STD Control Programs: Guide and Indicators.
Geneva: UNAIDS; May 1999.
19. Rossi PH, Freeman HE. Evaluation: A Systematic Approach.
Newbury Park, CA: Sage; 1993.
20. World Health Organization. Global AIDS Strategy.
Geneva: WHO; 1992. WHO AIDS Series No. 11.
21. Centers for Disease Control and Prevention. HIV Prevention Indicators Literature Review – Revised Version.
Prepared by Macro International, Inc. Atlanta, GA: CDC; 1997.
22. Centers for Disease Control and Prevention. Compendium of HIV Prevention Interventions with Evidence of Effectiveness.
Atlanta, GA: CDC; 1999.
23. Sogolow E, Kay L, Semaan S, et al. Development of an HIV intervention studies database for providers and researchers.
Presented at XII International Conference on AIDS.
Geneva, July 1998.
24. Institute of Medicine. Preventing Perinatal HIV Transmission.
Washington, DC: National Academy Press; 1998.
25. Groseclose SL, Weinstein B, Jones TS, Valleroy LA, Fehrs LJ, Kassler WJ. Impact of increased legal access to needles and syringes on practices of injecting drug users and police officers – Connecticut, 1992–1993.
J Acquir Immune Defic Syndr Hum Retrovirol 1995, 10: 82 –89.
26. Valleroy LA, MacKellar DA, Karon JM, Janssen RS, Hayman CR. HIV infection in disadvantaged out-of-school youth: prevalence for US Job Corps entrants, 1990 through 1996.
J Acquir Immune Defic Syndr Hum Retrovirol 1998, 19: 67 –73.
27. Institute of Medicine. Preventing HIV Transmission: The Role of Sterile Needles and Bleach.
Washington, DC: National Academy Press; 1995.
28. Binson D, Michaels S, Stall R, Coates TJ, Gagnon JH, Catania JA. Prevalence and social distribution of men who have sex with men: United States and its urban centers.
J Sex Res 1995, 32: 245 –254.
29. Page-Shafer K, McFarland W, Katz M. 1997 HIV Prevalence and Incidence Consensus Report.
San Francisco: San Francisco Department of Public Health; 1997.
31. Centers for Disease Control and Prevention. Rectal gonorrhea and risk behavior in men who have sex with men in San Francisco.
MMWR 1999, 48: 45 –48.
33. Stoto M, Almario D, McCormick M (eds). IOM Report: Reducing the Odds, Preventing Perinatal Transmission in the United States.
Washington, DC: National Academy Press; 1999.
34. Council of State and Territorial Epidemiologists. CSTE: Position Statement ID-4. National HIV Surveillance: Addition to the National Public Health Surveillance System.
Atlanta: CSTE; 1997.
35. Anderson JE, Wilson RW, Barker P, Doll L, Jones TS, Holtgrave D. Prevalence of sexual and drug-related HIV risk behaviors in the US adult population: results of the 1996 National Household Survey on Drug Abuse.
J Acquir Immune Defic Syndr Hum Retrovirol 1999, 21: 148 –156.
36. United Nations Joint Programme on AIDS and Family Health International's IMPACT Project. Meeting the Behavioral Data Collection Needs of National HIV/AIDS Control Programmes.
Geneva: UNAIDS; 1998.
37. National Institutes of Health. Interventions to prevent HIV risk behaviors.
NIH Consens Statement 1997, 15: 1 –41.
38. Cranston K, Lopez-Gomez AM, Amaro H, Cabral H. Monitoring our effectiveness: The CDC HIV prevention indicators project in Massachusetts.
Presented at the National HIV Prevention Conference. Atlanta, August–September 1999.
39. Page-Shafer K, McFarland W, Kim A, et al. Prevention indicators for evaluating the progress of HIV prevention in San Francisco, 1994–1997.
Presented at the National HIV Prevention Conference. Atlanta, August–September 1999.
40. Oldenburg N, Carr P, Wilkinson L, Rugg D. CDC HIV prevention indicators: How the indicators are used to characterize the HIV epidemic in a low-incidence state, Minnesota, 1993–1997.
Presented at the National HIV Prevention Conference. Atlanta, August–September 1999.
41. Centers for Disease Control and Prevention. Final FY 1999 Performance Plan and FY 2000 Performance Plan Executive Summary.
Atlanta, GA: CDC; 1999.
In addition to the authors, the HPI Field Collaborative includes: Boston University, School of Public Health, H. Amaro, H. Cabral; Centers for Disease Control and Prevention, National Center for HIV, STD, and TB Prevention, Division of HIV/AIDS Prevention-Intervention Research and Support, C. Lyles (Behavioral Intervention Research Branch), T. Akers, L. Wilkinson (Program Evaluation Research Branch); Louisiana Medical School, R. Scribner; Louisiana Office of Public Health, T. Farley, S. Posner, D. Wendell; Massachusetts Department of Public Health, K. Cranston, J. McGuire; Minnesota Department of Health, P. Carr, J. DeBoer; San Francisco Department of Public Health, M. Katz, A. Kim, W. McFarland, P. Norton; Texas Department of Health, S. King, J. Koch; University of Minnesota, Medical School, B. R. S. Rosser; University of Texas, Southwestern Medical Center, K. Batchelor, C. Hearn, M. Kazda.
Keywords: HIV prevention; monitoring; evaluation; indicators
© 2000 Lippincott Williams & Wilkins, Inc.