Clinical Science

Efficacy of Antiretroviral Therapy Adherence Interventions

A Research Synthesis of Trials, 1996 to 2004

Rivet Amico, K. PhD*; Harman, Jennifer J. PhD*†; Johnson, Blair T. PhD

JAIDS Journal of Acquired Immune Deficiency Syndromes: March 2006 - Volume 41 - Issue 3 - p 285-297
doi: 10.1097/01.qai.0000197870.99196.ea


Medication-taking behavior and adherence to prescribed medication regimens have long been of interest to scientists and practitioners, but the study of adherence specific to antiretroviral therapy (ART) has a comparatively young history. On its introduction in 1996, ART offered HIV-positive individuals tremendous life-sustaining benefits1-4 and is most likely the single most dramatic development yet in the treatment of HIV. When taken consistently and strictly as directed, the benefits of ART include increased CD4 cell counts, decreased viral load, and decreased probability of progression to full-blown AIDS and death.5-8 Depending on the specific regimen,9 patient failure to follow the ART regimen can lead to the development of treatment-resistant strains of the virus10,11 and poorer health outcomes. Given that medication-taking behavior can so profoundly affect an individual's response to treatment and potentially narrow options for future treatment, ART adherence is now widely recognized as a critical health promotion behavior for HIV-positive individuals on therapy.

During routine clinical visits, most clinicians counsel individuals on ART to take their medications consistently.12 The demanding nature of ART regimens underscores the need for more strategic and multifaceted interventions that extend beyond the typical patient-provider interaction or ad hoc clinic discussions. Over the past decade, a number of adherence interventions have been evaluated and have met with varying degrees of success. Reviews of ART adherence interventions through 1999 (see Fogarty et al13 and Haddad et al14) concluded that interventions have primarily been atheoretic and have lacked sufficient methodologic rigor to assess impact and generalize results. For example, Haddad et al14 located only 1 study that met all 3 of their methodologic inclusion criteria: delivery of a supportive or educational intervention, use of a comparison group, and at least 1 measure of adherence. Fogarty et al's13 review of 16 ART adherence intervention studies, only 4 of which were published in peer-reviewed journals, concluded that intervention effects were generally small and that the studies themselves were weak and underpowered (with an average sample size of only 57.6 participants). Ickovics and Meade15 reviewed ART adherence intervention outcome studies published or presented at conferences through 2001. Echoing earlier reviews, Ickovics and Meade15 called for improvements in methodology. Unlike earlier reviews, however, they described several controlled outcome studies using comprehensive and individualized interventions that seemed promising, noting that such interventions had the potential to improve and sustain ART adherence over time. Similarly, the review by Cote and Godin16 noted the preliminary success of pilot and feasibility studies but also concluded that, overall, the literature lacked organized and methodologically rigorous trials. Simoni et al17 reviewed ART adherence interventions published through January 2003.
Their extensive review identified and described 21 intervention outcome studies. Overall, studies were still described as weak and underpowered; the average sample size was only 66. Simoni et al17 did not calculate the average effect of the interventions on ART adherence, most likely because many of the studies reported insufficient outcome information (eg, these investigators noted that only 36% of the studies reported information on adherence indicators). Thus, although the number of intervention outcome studies targeting the enhancement or maintenance of ART adherence is increasing in the published literature, there are continued problems with methodology and standards for reporting outcome measures. In addition, although these reviews have provided exceptional detail in terms of intervention characteristics and reported outcomes, none have used quantitative techniques to evaluate the literature.

With growing numbers of ART intervention studies under development or in early stages of implementation (see Cote and Godin16), there is a pressing need not only to describe and organize past research systematically but also to assess quantitatively the impact of interventions conducted to date. The current research synthesis standardizes and evaluates intervention effects in the published literature from the earliest study, published in 1996, to December 2004. One outcome is the overall average effect size (ES) for adherence interventions, which, as we noted, has not yet been estimated. In addition, to provide as reliable an estimation of ES as possible, we evaluated the potential impact of certain study characteristics and methodologic features on ES. Finally, identifying overall intervention effects can provide the critical parameter estimates needed for a priori power analyses and sample size determination for adherence intervention outcome studies in development or in early stages of implementation.


Sample of Studies

Potential ART adherence intervention studies were identified through a variety of strategies. Searches were conducted on the MEDLINE and PsycINFO databases for the years 1996 (the year ART was introduced) through December 2004. Combinations of search terms such as highly active antiretroviral therapy (HAART), ART, adherence, medication adherence, interventions, antiretroviral therapy, and nonadherence were used to locate this literature. Additional publications were located by reviewing reference sections of articles as well as by contacting researchers and authors directly for in-press articles or manuscripts. To be included in this review, a study must have (1) evaluated an intervention that targeted the enhancement or maintenance of ART adherence as a primary outcome, (2) compared the adherence intervention against a control or pretreatment baseline, (3) been published in a peer-reviewed journal, and (4) provided sufficient information to calculate an ES (eg, sample size and pre- and postvalues for adherence).

As demonstrated in Figure 1, 828 articles, book chapters, and dissertations contained the relevant search terms, and all but 55 were initially excluded because of failure to meet inclusion criteria (eg, not containing an intervention). Of the remaining studies, an additional 31 were excluded because they provided insufficient statistics about their measures of adherence behavior (k = 11), failed to provide an estimate of baseline adherence before initiation of the use of medication events monitoring system (MEMS) medication caps (k = 5), or failed to provide comparison group or baseline adherence information (k = 15). An original pool of 17 studies that failed to provide sufficient statistical information required for the computation of an ES was reduced to 15 by contacting authors and receiving additional information. Thus, in total, 24 studies provided all necessary information and fully met the inclusion criteria. These 24 studies produced 25 separate postintervention effects on ART adherence and an additional 13 follow-up assessments.

FIGURE 1. Selection process for study inclusion in the meta-analysis.

Coding of Interventions

We classified each intervention on several dimensions: (1) design type (between vs. within subjects), (2) sample size, (3) use of random assignment (yes vs. no), (4) number of weeks between onset of the intervention and assessment of adherence, (5) intensity of the intervention, (6) theoretic basis of the intervention, (7) whether participants in the study were selected on the basis of known or anticipated preexisting problems with ART adherence, and (8) total duration of the intervention. Interventions were categorized qualitatively into 1 of 5 levels of intervention intensity by 2 trained raters. Specifically, low-level adherence interventions provided ad hoc conversations with clinic or health care staff or some other form of service that was a slight extension of standard of care.12 Low-medium-level interventions additionally seemed to have trained staff available for support and may have provided memory aids, such as beepers or pill boxes. Medium-level interventions had all the features of the previous level but additionally seemed to take a multidisciplinary approach and offered more intensive services (eg, general support groups). Medium-high-level interventions offered even greater intensity in services, such as adherence-specific support groups or visiting nurse programs, and noted a theoretic basis for the interventions. Finally, high-level interventions had all the features of the previous levels and also provided strategies and skills appropriate for all levels of adherence, from initiation to maintenance. Studies were coded as having a theoretic basis only when a specific theory or theoretic rationale for an intervention was described anywhere in the text of the report of the study. Studies that reported only recruiting patients who pretested as poor adherers or who were referred for participation by their health care provider because of suspected poor adherence were classified as targeting those with known or suspected adherence problems. 
Agreement between coders for intervention intensity coding was high (r = 0.920), and discrepancies were resolved via discussion. For the independent coding of all other study characteristics and dimensions as well as for calculation of ES for each assessment of outcome, these same 2 raters had 100% agreement.

Effect Size Derivation

We calculated individual ESs for the degree of adherence observed in each of the separate interventions. Studies nearly always defined outcomes in continuous rather than dichotomous terms; thus, the ES calculated was the standardized mean difference (d) rather than the odds ratio (OR), which gauges the relation of a categoric variable to a categoric outcome. For each intervention, d was calculated as the difference in mean ART adherence between the treatment and a comparison, divided by the standard deviation (SD); each d was corrected for the bias that results from small samples.18,19 When available, the ES was calculated based on a treatment versus control group (between-subjects design); otherwise, baseline levels of adherence for the treatment group (within-subjects design) served as the comparison standard, as Table 1 indicates.
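As a sketch of this calculation (using hypothetical group summaries, not values from any reviewed trial), the small-sample bias-corrected d for a between-subjects comparison can be computed as follows:

```python
import math

def hedges_d(mean_tx, mean_ctrl, sd_tx, sd_ctrl, n_tx, n_ctrl):
    """Bias-corrected standardized mean difference for a
    between-subjects comparison (Hedges' correction)."""
    # Pooled SD serves as the denominator in between-subjects designs
    pooled_var = ((n_tx - 1) * sd_tx**2 + (n_ctrl - 1) * sd_ctrl**2) / (n_tx + n_ctrl - 2)
    d = (mean_tx - mean_ctrl) / math.sqrt(pooled_var)
    # Correction factor for the bias that results from small samples
    j = 1 - 3 / (4 * (n_tx + n_ctrl - 2) - 1)
    return j * d

# Hypothetical example: intervention arm 85% adherent (SD 15),
# control arm 78% adherent (SD 18), 30 participants per arm
g = hedges_d(85, 78, 15, 18, 30, 30)
print(round(g, 2))  # prints 0.42
```

A positive value indicates greater adherence in the treatment group; the correction factor shrinks d slightly, and matters most when samples are small, as in much of this literature.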

TABLE 1. Description of ART Adherence Interventions and Their ESs

The pooled SD served as the denominator in the ES calculation in between-subjects designs, and the SD of the paired comparisons served as the denominator in within-subjects designs. Most studies provided only proportional differences in overall adherence outcome levels between treatment and control groups, but several provided mean adherence levels and included SDs. We gave ESs a positive sign if the intervention improved ART adherence relative to the standard and a negative sign if adherence declined.18,20 ESs were calculated on the measures provided at the first follow-up after the intervention, a procedure that reduced methodologic variance across the compared studies. When a study offered 2 or more postintervention assessments of adherence, the first was used for primary analyses, and the second was also used in an exploratory analysis to evaluate how ESs might decay over time. When more than 1 assessment of ART adherence was provided for any given outcome, we averaged available ESs across measures.

Analyses of Effect Sizes

Analyses to examine weighted mean ESs across the sample were performed under fixed-effects and random-effects assumptions; analyses to examine whether features of the studies explained variability in the ESs followed fixed-effects assumptions.18,21 These models provide weighted mean ESs and tests of effect modification by study characteristics. For cases in which study features significantly explained ES variability but the fixed-effects models were incorrectly specified, models incorporating random-effects assumptions were used to assess whether the patterns identified in the fixed-effects model remained viable. Moreover, for the purpose of illustration, Table 1 also provides the equivalent OR22 for each ES or weighted mean ES [OR = exponential (ES × 1.81)]. Such an OR indicates the odds of improved medication adherence among study participants in the intervention conditions relative to those in the control conditions. Finally, we examined the post hoc power of ESs using GPOWER software23 based on study ES estimates and reported sample size. Power has traditionally been understood as the sensitivity of an experiment: the conditional probability of rejecting the null hypothesis (in this case, the hypothesis that the intervention, in reality, had no effect on adherence) when it is in fact false, thereby avoiding a type II error.24 Post hoc (or observed) power is a function of the observed ES and its P value (see Lenth25 and Levine and Ensom26) and thus should not be used to draw conclusions regarding studies being "over-" or "underpowered."
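The d-to-OR conversion used for Table 1 can be sketched as follows (the function is our illustration of the published formula, not the authors' code; the 0.35 input is merely an example value):

```python
import math

def d_to_or(d):
    """Convert a standardized mean difference to an approximately
    equivalent odds ratio via the logistic approximation
    ln(OR) = d * pi / sqrt(3), i.e., roughly d * 1.81."""
    return math.exp(d * 1.81)

# An ES of 0.35 corresponds to an OR of about 1.88
print(round(d_to_or(0.35), 2))  # prints 1.88
```

A d of 0 maps to an OR of exactly 1.0 (no difference between conditions), so the transformation preserves the null point while expressing the effect on an odds scale.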


Description of Studies

Table 1 shows the general sample and methodologic characteristics of the 24 separate intervention outcome studies, which produced a total of 25 postintervention ART adherence outcomes. Thirteen (52%) used a randomized comparison group, and 2 (8%) used a nonrandomized comparison group, whereas 10 (40%) of the interventions used a within-group design. Fourteen (56%) of the studies reported single follow-up assessments, and 11 (44%) reported more than 1 follow-up assessment; as noted, analyses focused on the first available follow-up assessments.

In terms of data reported in each of the reviewed studies, none of the studies included reports of adverse events, only 6 (25%) reported the proportion of patients on certain antiretroviral (ARV) medications (eg, protease inhibitors [PIs], nonnucleoside reverse transcriptase inhibitors [NNRTIs]), 7 (29%) made specific reference to their sample's length of time on ARVs, and 12 (50%) reported the number of pills characteristic of their sample's ARV regimen. As indicated in Table 1, the primary outcome measure was typically self-reported adherence, although 4 studies (17%) used MEMS caps and 17 studies (71%) descriptively or statistically supplemented their outcomes with indices of HIV viral load and/or CD4 cell counts.

The most frequently reported intervention features were reminder systems and some degree of counseling support, although the intensity and provider of that support varied. Nine (36%) of the 25 intervention studies reviewed explicitly stated that they used some form of reminder device or strategy (see McPherson-Baker et al27, Powell-Cope et al28, and Rigsby et al29), such as electronic reminders, pillboxes, stickers, and telephone reminders. Similarly, 18 (72%) of the 25 studies reviewed provided some level of counseling support by providers (see Margolin et al30) or specialized support staff (see Tuldra et al31) or some type of feedback regarding disease progression (see Haubrich et al32), whereas other interventions involved enlisting several sources of support (see McPherson-Baker et al27). Three studies included directly observed therapy in a prison setting or by visiting nurses as part of their intervention (see Kirkland et al33). Of the 21 studies that reported the duration of intervention exposure, the average length of intervention was 20 weeks, ranging from 4 to 48 weeks.

Effects of Interventions on Adherence

ES calculations for each study are shown graphically in Figure 2. On average, treatment participants significantly improved adherence relative to the comparison; the confidence intervals for the fixed-effects and random-effects weighted means did not include 0 (fixed effects: M d = 0.25, 95% confidence interval [CI]: 0.18 to 0.33; random effects: M d = 0.35, 95% CI: 0.20 to 0.51). Yet, adherence success levels varied considerably across the studies (range: −1.19 to 1.45) as gauged by the significant homogeneity value [Q(25) = 72.49; P < 0.001]. Thus, the interventions were not equally effective. Post hoc power for the ESs ranged from 0.05 to 0.99 (M = 0.50, SD = 0.33).
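The fixed-effects pooling and homogeneity test behind these figures can be sketched as follows (a minimal illustration with hypothetical ESs and sampling variances, not the data from the reviewed studies):

```python
def fixed_effects_pool(effects, variances):
    """Inverse-variance weighted mean ES, its 95% CI, and the
    Q homogeneity statistic under a fixed-effects model."""
    weights = [1.0 / v for v in variances]
    mean_es = sum(w * d for w, d in zip(weights, effects)) / sum(weights)
    # Q is compared against a chi-square with k - 1 degrees of freedom;
    # a significant Q means the ESs are not homogeneous
    q = sum(w * (d - mean_es) ** 2 for w, d in zip(weights, effects))
    se = (1.0 / sum(weights)) ** 0.5
    ci = (mean_es - 1.96 * se, mean_es + 1.96 * se)
    return mean_es, ci, q

# Three hypothetical trials: small ESs get large weights when precise
d_vals = [0.10, 0.40, 0.90]
variances = [0.02, 0.05, 0.10]
mean_es, ci, q = fixed_effects_pool(d_vals, variances)
print(round(mean_es, 3), round(q, 2))  # prints 0.275 5.75
```

Because weights are inverse variances, precise (typically larger) studies dominate the pooled mean; a CI excluding 0 corresponds to a significant overall effect, as in the results above.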

FIGURE 2. Forest plot of ESs for adherence interventions. ESs are sized proportional to the inverse of their variance; positive values imply greater adherence in the intervention relative to the control group. Those in green indicate statistically significant improvements, and that in blue indicates a statistically significant reversal. The confidence interval of the mean ES is given by the width of its diamond symbol.

We proceeded to examine whether study characteristics could account for this variability. As Table 2 indicates, demographic variables, such as the gender composition of study participants, did not account for significant variability in ESs. Similarly, ES variability had no statistically significant relation to whether adherence change was examined in comparison to a control group or relative to baseline. Features of the interventions themselves were also assessed, specifically whether the intervention had an articulated theoretic basis and its coded level of intensity; neither related to ES. We also explored potential effects of length of exposure to an intervention and of measurement strategy on outcomes. Intervention duration did not significantly relate to adherence outcomes, nor did we find a significant difference in adherence outcomes between studies using MEMS to assess adherence and those using self-report. In sum, study design, sample demographics, clear articulation of a theoretic basis, level of intervention intensity, duration of intervention exposure, and measurement strategy were all unrelated to ESs.

TABLE 2. Tests of Effect Modification by Study and Sample Characteristics

We next assessed the extent to which intervention effects differed on the basis of the preintervention or baseline adherence of study participants. Specifically, we evaluated the effects of interventions that targeted patients with known or anticipated adherence problems through recruitment strategies in comparison to studies that did not select participants on the basis of adherence problems at baseline. As Table 2 indicates, ESs significantly differed between these types of studies (QR = 15.75; P < 0.001). Studies targeting those with known or anticipated adherence problems exhibited significantly larger intervention effects (d = 0.62) than studies that did not target poor adherers (d = 0.19). Thus, on average, interventions that recruited participants with known or anticipated adherence problems demonstrated, according to conventions (see Cohen24), a medium effect on adherence, whereas those with open recruitment or enrollment demonstrated a small ES. Indeed, within the studies that targeted poor adherers, the assumption of homogeneity was not violated [QW(9) = 15.01; P = 0.06], but it was violated for the studies that did not target patients with known or anticipated problems with adherence at baseline [QW(15) = 41.73; P < 0.001]. Thus, the medium average ES seemed to be an adequate description of intervention effects when studies sought to intervene with individuals who had known or anticipated problems with adherence. The small average ES for studies that did not specifically enroll or target those with baseline adherence problems may not be a reliable or accurate point descriptor, because there was still significant variability in ESs among these studies. Nonetheless, a model that evaluated this study feature and incorporated random-effects assumptions again concluded that baseline adherence problems were significantly linked to adherence ESs. Moreover, this model was correctly specified [Q(22) = 31.92; P = 0.08].

Finally, we assessed whether the effects of interventions on ART adherence changed with time. As Table 2 shows, there was no significant tendency for intervention efficacy to change with time, where the number of weeks between onset of intervention and assessment of adherence varied between 0 and 26. When all available ESs (see Table 1) were included, with the number of weeks ranging between 0 and 48, there was still no pattern for intervention effects to decay (β = −0.03; P = 0.81).


Discussion

ART adherence interventions have the potential to provide support for HIV-positive individuals on this therapy. To the extent that such interventions are effective, they can help individuals to achieve and maintain levels of adherence that maximize the health benefits of ART and minimize the opportunity for the development of multidrug resistance (MDR).10,11 At the level of the HIV epidemic, effective ART adherence interventions may in fact contribute to reducing the likelihood of transmission of MDR virus by way of reducing the number of individuals who develop it. The impact of ART adherence interventions on individual and public health is necessarily limited by the efficacy of such interventions to enhance suboptimal adherence and maintain optimal adherence.

The current study sought to establish the extent to which such interventions have been efficacious to date and which features of interventions and intervention studies might be associated with improvements in adherence. We reviewed ART adherence intervention outcome studies published in peer-reviewed journals from 1996 through December 2004; 24 studies met our criteria for inclusion, with samples ranging from 6 to 435 patients. Typically, studies attempted to improve ART adherence to "optimal" levels (eg, 90%-95% adherence). This sample of studies reported 25 assessments of adherence behavior after intervention and an additional 13 extended follow-up assessments, each of which was converted to a standardized mean ES. Meta-analyses of ESs revealed that, overall, interventions demonstrated an average ES of approximately 0.35, which is conventionally of "small" magnitude,24 yet there was substantial variability across these ESs. Several factors did not account for significant variability in efficacy, including gender composition of the sample, study design (within-group vs. between-group designs), advocacy of theory in constructing the intervention, level of intensity of the intervention, length of exposure to an intervention, and measurement strategy. Yet, when participants with poor or suspected poor adherence were recruited, intervention ESs were significantly larger than for samples that did not select for this variable (ES of 0.62 vs. 0.19, respectively).

Intervention Efficacy and Study or Sample Features

Gender and other kinds of demographic variables have not traditionally been associated with adherence (see the articles by Avants et al34 and Catz et al35); thus, it is not surprising that gender exerted no apparent influence on the overall efficacy of the interventions. Similarly, equivalence of ESs for studies with between-group or within-group designs is reasonable, because differences between interventions using more or less rigorous designs may rest more on issues surrounding internal validity than on ES per se. It was, however, somewhat surprising that theoretic bases and level of intensity of the interventions were not associated with ESs.

We coded interventions on the basis of the articulated theoretic model used in intervention design and development. Whereas some studies clearly stated that they were theoretically based, many failed to provide specific details about their interventions. It is likely that most if not all of the studies in the sample had a particular theoretic rationale for the design and implementation of their intervention; specific theoretic articulation may have been more a function of presentational style or limits in the size or scope of presentation. Also, the "cafeteria-style" approach that characterizes many adherence interventions36 may make it particularly difficult to distill the underlying theories of interventions or their relative contributions to intervention effects. Nonetheless, the clear articulation of the theoretic basis of an intervention, regardless of the intervention's effect on adherence, is essential to organize the literature coherently.

The coding of intervention intensity levels, which was almost uniformly medium or higher in the current sample of studies, posed similar difficulties. Our team coded intensity on the basis of the articulated strategies used in an intervention. Despite excellent reliability, it is difficult to determine whether a lower intensity intervention was in reality low or merely seemed low because of selective or restricted presentation of features of the intervention. We suspect that authors were generally brief in reporting intervention features and specific strategies in their presentation of outcome studies, which may have lowered the validity of this coded variable. It is unfortunate that most study reports failed to provide sufficient information about these critical aspects of their interventions, because these details are critically important to the organization and strategic progression of the literature.

In terms of measurement strategy, whereas the current sample's intervention effects on adherence did not seem to depend on the manner in which adherence was assessed (self-report vs. MEMS cap use), it is likely that the current sample did not have a sufficient number of studies using MEMS to assess baseline and postintervention adherence to produce a reliable comparison to the large number of studies using self-report. Previous research has found that self-reported adherence tends to be higher than comparative levels of adherence estimated from MEMS data,37,38 although there is support for both measurement strategies in terms of their correlations with CD4 cell counts, viral load indices, and blood concentrations of ARV medications.11,31,32,39-43 A convention of using multiple measures would be a valuable strategy,44 but the inclusion of a MEMS-type assessment strategy, specifically, may not be realistic for studies and interventions that have limited resources. The costs associated with MEMS, in terms of finances and participant burden, make it an inaccessible strategy for some populations. As such, a reasonable combination of assessment strategies might include, at minimum, self-report- and/or MEMS-generated levels of adherence and at least 1 measure of biologic outcome. Only with multiple measures can changes in adherence be attributed fully to actual changes in behavior and not to changes in self-report alone.

Similarly, studies did not consistently report certain information about the ART regimens for patients in their samples that may have influenced rates of adherence (eg, pill burden, doses per day, or the experience of acute adverse medication effects) and categorization of what would be considered optimal adherence (eg, type of ART medications prescribed). There is a growing recognition that the relation between adherence and health outcomes is complex and at times nonlinear,9 making the failure to describe a sample's ART regimen characteristics fully increasingly problematic. As a general standard, the types of ART medications prescribed, length of time on ART therapy, and number of pills and doses in a regimen should be provided in all treatment outcome studies.

Although the studies reviewed provided insufficient information to assess for the potential impact of ART regimen characteristics on adherence intervention outcomes, we were able to assess the impact of certain enrollment strategies. Studies that specifically targeted patients with known or anticipated problems with adherence had significantly larger effects on adherence at posttest than those that did not target such patients. We found no evidence that intervention effects decayed with time, suggesting that adherence effects were not an artifact of selecting initially low-adherent patients at baseline. Thus, interventions targeting those with poor adherence seemed to have a strong impact on adherence that held over time. As more studies of intervention outcomes become available with extended periods of follow-up, this tentative conclusion can be assessed further.

In contrast, studies that did not recruit participants with known or anticipated problems with ART adherence had a generally small effect on adherence, but the ESs underlying this mean effect lacked homogeneity. Other features of these studies, such as the population studied or the study design itself, may explain this variability. The limited size of the current literature prohibits extensive exploration of such features. As more outcome studies of ART interventions targeting patients across the spectrum of adherence needs and experience with ART become available for review, future research should work toward identifying the underlying factors contributing to variability in effects.

We also evaluated the current studies in terms of their post hoc power to detect an intervention effect (see Table 1). For illustrative purposes, it may be more useful to consider the extent to which the current sample would have been sufficiently powered to detect the intervention effects we identified in our synthesis. A general estimation of the sample sizes required for the recommended 90% power23 to detect the population ESs identified here suggests that a large proportion of the studies in the current sample were underpowered. Assuming a population effect of 0.62, which is toward the higher end of the meta-analytic results in the current study, only 6 (25%) of the studies in the current sample had samples large enough (approximately n = 92) for 90% power. Using the lowest average ES in the current study, d = 0.19, none of the studies had a sample large enough to reach 90% power; indeed, given the small effect, the sample required to do so (n = 952) is arguably not cost-effective. Depending on the study design, clinical relevance or significance is likely to be a better metric for the desired ES of a given intervention. Thus, using our estimates for ESs, our results are similar to the conclusions of other reviews of the ART adherence intervention literature13,15: studies tend to be underpowered.
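These sample size figures can be roughly reproduced with the standard normal-approximation formula for a two-group comparison of means. We assume here a one-tailed alpha of 0.05 and 90% power (an assumption on our part, not stated in the text); this yields totals close to, though not exactly matching, the cited values:

```python
import math
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.90, one_tailed=True):
    """Approximate per-group n for a two-group comparison of means
    (normal approximation; exact t-based values run slightly larger)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha if one_tailed else 1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return math.ceil(2 * (z_alpha + z_beta) ** 2 / d ** 2)

# Total sample sizes for the two mean ESs discussed above
print(2 * n_per_group(0.62))  # prints 90, close to the ~92 cited
print(2 * n_per_group(0.19))  # prints 950, close to the ~952 cited
```

The small gap between these approximations and the cited figures is expected, since exact power calculations (eg, in GPOWER) account for finite-sample details that the normal approximation ignores.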

There are several limitations in the current study that qualify and provide context to our results and conclusions. Primarily, these results were limited by the fairly small sample of published ART adherence intervention outcome studies and by the dependence on rater interpretation for the extraction of intervention characteristics. Only 24 separate studies published in peer-reviewed journals met criteria for inclusion. Studies that did not clearly articulate an intervention or report pre- and postintervention assessments of ART adherence were excluded. This criterion eliminated a number of studies that used directly observed therapy or MEMS caps as their only method of measuring adherence and provided no real baseline measurement. Focusing on published work also eliminated a number of potentially promising interventions. Initially, we had in fact included conference presentations and posters in our search, but we later eliminated them because these reports provided so little information about the interventions. In addition, we also had some difficulty in ascertaining intervention components from published work; however, as a whole, the group of reports included here was thorough enough for our analyses. This problem, however, raises the potential limitation of publication bias. Of the 25 separate time-1 ES estimates, 9 (36%) were significant (see Fig. 2), which does not suggest publication bias (by conventional expectations, only 5% of the trials would yield a significant result by chance). Nonetheless, the limited sample size and paucity of detailed information about the interventions in many of the included studies are worthy of note.

Another limitation in the general literature is the insufficient reporting of ART regimen and certain sample characteristics. ART regimens can be complex, sometimes involving numerous prescribed medications with complicated dosing schedules and dietary restrictions. Although such complexity can make adherence difficult, the adherence literature cautions that simpler ART regimens do not necessarily guarantee better long-term adherence. A number of studies suggest that decreasing regimen complexity may strengthen adherence (see Altice et al45); however, adherence rates are actually quite similar across different diseases and medical regimens of markedly differing complexity (see Holzemer et al46 and Horne47). In fact, some studies have found that regimen complexity may not be a pivotal determinant of adherence to ART (see Gao et al48 and Singh et al49). Moreover, as ART regimens become simpler (eg, involving fewer pills per day), the consequences of missing a single dose may actually become more medically severe. As studies provide more detail about the regimen complexity of their samples, the impact of complexity on adherence behaviors can be investigated further.

Similarly, variability in operationalizing adherence continues to characterize the adherence outcome literature36 and makes cross-study comparisons of intervention effects on adherence difficult. For instance, studies varied greatly in how adherence percentages were reported, with some using cutoff values of 80% and others 90% or 95%. Such variability makes ceiling effects difficult to estimate across studies.

Recommendations for Future Research

One possible avenue for future research is the identification of a "continuum" of ART adherence needs. It is currently unclear whether the needs of the treatment-naive patient are similar to those of patients with an extended ART history. Nor is it necessarily reasonable to assume that the needs of patients "failing" to adhere to ART are similar to those of patients working to maintain "optimal" or even perfect adherence. Arguably, as in other areas of health behavior (see Bellg50), the needs and processes involved in establishing a novel health behavior are separate from those involved in maintaining an achieved one. In the case of ART adherence, the needs of a particular individual are also likely to vary along these dimensions over time. Recognizing these potential differences, interventions delivered to groups of patients scattered across a continuum of adherence needs may need to be particularly sensitive to the possibility that a given intervention is differentially effective for these subgroups. Similarly, the operationalization of outcomes likely differs among subgroups depending on whether success is defined as no change in adherence (maintenance) or as an increase in adherence from a low pretest level. The clear identification of adherence needs for groups with diverse ART adherence histories and practices, the development of interventions with components specifically targeted to those needs, and the use of appropriate evaluation strategies are exciting areas for further scientific and practical exploration.

Given the results of the current study, it is reassuring that ART adherence interventions seem to be moderately successful in improving adherence among patients with known or anticipated adherence problems. Interventions targeting individuals spanning a continuum of ART adherence needs (eg, from treatment naive to long-term maintainers) demonstrated low ESs on average but were also quite diverse in their effects. Future research can help to explore the reasons for such variability by assessing the differential intervention needs of patients who are treatment naive, those who are experiencing problems with adherence, and those who are maintaining optimal adherence. Ultimately, interventions that offer a compendium of resources and strategies for patients with diverse and changing ART adherence needs are likely to hold the most promise.


Special thanks to Stephanie Macoul, Brian Marini, I-Fen Tu, and Megan O'Grady for their assistance with the qualitative coding in this study.


1. Bartlett JA. Addressing the challenges of adherence. J Acquir Immune Defic Syndr. 2002;29(Suppl):S2-S10.
2. Pradier C, Carrieri P, Bentz L, et al. Impact of short-term adherence on virological and immunological success of HAART: a case study among French HIV-infected IDUs. Int J STD AIDS. 2001;12:324-328.
3. Demasi R, Tolson J, Pham S, et al. Self-reported adherence to HAART and correlation with HIV RNA: initial results with the patient medication adherence questionnaire. Presented at: Sixth Conference on Retroviruses and Opportunistic Infections; 1999; Chicago.
4. Shor-Posner G, Lecusay R, Miguez-Burbano MJ, et al. Quality of life measures in the Miami HIV-1 infected drug abusers cohort: relationship to gender and disease status. J Subst Abuse. 2000;11:395-404.
5. Arnsten J, Demas P, Gourevitch M, et al. Adherence and viral load in HIV-infected drug users: comparison of self-report and medication event monitors (MEMS). Presented at: Seventh Conference on Retroviruses and Opportunistic Infections; 2000; San Francisco.
6. Bangsberg DR, Moss AR, Deeks SG. Paradoxes of adherence and drug resistance to HIV antiretroviral therapy. J Antimicrob Chemother. 2004;5:696-699.
7. Hogg RS, Yip B, Chan K, et al. Nonadherence to triple combination therapy is predictive of AIDS progression and death in HIV-positive men and women. Presented at: Seventh Conference on Retroviruses and Opportunistic Infections; 2000; San Francisco.
8. Manfredi R, Calza L, Chiodo F. Dual nucleoside analogue treatment in the era of highly active antiretroviral therapy (HAART): a single-centre cross-sectional survey. J Antimicrob Chemother. 2001;48:299-302.
9. Bangsberg DR, Hecht FM, Clague H, et al. Provider estimate and structured patient report of adherence compared with unannounced pill count. Presented at: Seventh Conference on Retroviruses and Opportunistic Infections; 2000; San Francisco.
10. Boden D, Hurley A, Zhang L, et al. HIV-1 drug resistance in newly infected individuals. JAMA. 1999;282:1135-1141.
11. Hecht FM, Colfax G, Swanson M, et al. Adherence and effectiveness of protease inhibitors in clinical practice. Presented at: Fifth Conference on Retroviruses and Opportunistic Infections; 1998; Chicago.
12. Harman JJ, Amico RA, Johnson BT. Standard of care: promoting antiretroviral adherence in clinical care. AIDS Care. 2005;2:237-251.
13. Fogarty L, Roter D, Larson S, et al. Patient adherence to HIV medication regimens: a review of published and abstract reports. Patient Educ Couns. 2002;46:93-108.
14. Haddad M, Inch C, Glazier RH, et al. Patient support and education for promoting adherence to highly active antiretroviral therapy for HIV/AIDS. Cochrane Database Syst Rev. 2000;3:Cd001442.
15. Ickovics JR, Meade CS. Adherence to antiretroviral therapy among patients with HIV: a critical link between behavioral and biomedical sciences. J Acquir Immune Defic Syndr. 2002;31(Suppl):S98-S102.
16. Cote J, Godin G. Efficacy of interventions in improving adherence to antiretroviral therapy. Int J STD AIDS. 2005;16:335-343.
17. Simoni J, Frick P, Pantalone D, et al. Antiretroviral adherence interventions: a review of current literature and ongoing studies. Top HIV Med. 2003;11:185-198.
18. Hedges LV, Olkin I. Statistical Methods for Meta-Analysis. Orlando: Academic Press; 1985.
19. Johnson BT, Eagly AH. Quantitative synthesis of social psychological research. In: Reis HT, Judd CM, eds. Handbook of Research Methods in Social and Personality Psychology. New York: Cambridge University Press; 2000:496-528.
20. Johnson BT. DSTAT 1.10: Software for the Meta-Analytic Review of Research Literatures. Hillsdale, NJ: Lawrence Erlbaum Associates; 1993.
21. Lipsey MW, Wilson DB. Practical Meta-Analysis. Thousand Oaks, CA: Sage Publications; 2001.
22. Chinn S. A simple method for converting an odds ratio to effect size for use in meta-analysis. Stat Med. 2000;19:3127-3131.
23. Erdfelder E, Faul F, Buchner A. GPOWER: a general power analysis program. Behav Res Methods Instrum Comput. 1996;28:1-11.
24. Cohen J. Statistical Power Analysis for the Behavioral Sciences. Hillsdale, NJ: Lawrence Erlbaum Associates; 1988.
25. Lenth RV. Some practical guidelines for effective sample size determination. Am Stat. 2001;55:187-193.
26. Levine M, Ensom MH. Post hoc power analysis: an idea whose time has passed? Pharmacotherapy. 2001;21:405-409.
27. McPherson-Baker S, Malow RM, Penedo F, et al. Enhancing adherence to combination antiretroviral therapy in non-adherent HIV-positive men. AIDS Care. 2000;12:399-404.
28. Powell-Cope GM, White J, Henkelman EJ, et al. Qualitative and quantitative assessments of HAART adherence of substance-abusing women. AIDS Care. 2003;15:239-249.
29. Rigsby MO, Rosen MI, Beauvais JE, et al. Cue-dose training with monetary reinforcement: pilot study of an antiretroviral adherence intervention. J Gen Intern Med. 2000;15:841-847.
30. Margolin A, Avants SK, Warburton L, et al. A randomized clinical trial of a manual-guided risk reduction intervention for HIV-positive injection users. Health Psychol. 2003;22:223-228.
31. Tuldra A, Fumaz CR, Ferrer MJ, et al. Prospective randomized two-arm controlled study to determine the efficacy of a specific intervention to improve long-term adherence to highly active antiretroviral therapy. J Acquir Immune Defic Syndr. 2000;25:221-228.
32. Haubrich RH, Little SJ, Currier JS, et al. The value of patient-reported adherence to antiretroviral therapy in predicting virologic and immunologic response. AIDS. 1999;13:1099-1107.
33. Kirkland LR, Fischl MA, Tashima KT, et al. Response to lamivudine-zidovudine plus abacavir twice daily in antiretroviral-naïve, incarcerated patients with HIV infection taking directly observed treatment. Clin Infect Dis. 2002;34:511-518.
34. Avants SK, Margolin A, Warburton LA, et al. Predictors of nonadherence to HIV-related medication regimens during methadone stabilization. Am J Addict. 2001;10:69-78.
35. Catz SL, Kelley JA, Bogart LM, et al. Patterns, correlates, and barriers to medication adherence among persons prescribed new treatments for HIV disease. Health Psychol. 2000;19:124-133.
36. Mihalko SL, Brenes GA, Farmer DF, et al. Challenges and innovations in enhancing adherence. Control Clin Trials. 2004;25:447-457.
37. Arnsten JH, Demas PA, Farsadegan H, et al. Antiretroviral therapy adherence and viral suppression in HIV-infected drug users: comparison of self-report and electronic monitoring. Clin Infect Dis. 2001;33:1417-1423.
38. Walsh JC, Mandalia S, Gazzard BG. Responses to a 1 month self-report on adherence to antiretroviral therapy are consistent with electronic data and virological outcome. AIDS. 2002;16:269-277.
39. Kleeberger CA, Phair JP, Strathdee SA, et al. Effect of computer-assisted self-interviews on reporting of sexual HIV risk behaviours in a general populations sample: a methodological experiment. J Acquir Immune Defic Syndr. 2001;26:82-92.
40. Knobel H, Alonso J, Casado JL, et al. Validation of a simplified medication adherence questionnaire in a large cohort of HIV-infected patients: the GEEMA Study. AIDS. 2002;16:605-613.
41. Moatti JP, Spire B. Living with HIV/AIDS and adherence to antiretroviral treatments. In: Moatti J-P, Souteyrand Y, Prieur A, et al, eds. AIDS in Europe: New Challenges for the Social Sciences. New York: Routledge; 2000:57-73.
42. Murri R, Ammassari A, Gallicano K, et al. Patient-reported nonadherence to HAART is related to protease inhibitor levels. J Acquir Immune Defic Syndr. 2000;24:123-128.
43. Nieuwkerk PT, Sprangers MA, Burger DM, et al. Limited patient adherence to highly active antiretroviral therapy for HIV-1 infection in an observational cohort study. Arch Intern Med. 2001;161:1962-1968.
44. Samet JH, Sullivan LM, Traphagen ET, et al. Measuring adherence among HIV-infected persons: is MEMS consummate technology? AIDS Behav. 2001;5:21-30.
45. Altice FL, Mostashari F, Friedland GH. Trust and the acceptance of and adherence to antiretroviral therapy. J Acquir Immune Defic Syndr. 2001;28:47-58.
46. Holzemer WL, Corless IB, Nokes KM, et al. Predictors of self-reported adherence in persons living with HIV disease. AIDS Patient Care STDS. 1999;13:185-197.
47. Horne R. Patients' beliefs about treatment: the hidden determinant of treatment outcome? J Psychosom Res. 1999;47:491-495.
48. Gao X, Nau DP, Rosenbluth SA, et al. The relationship of disease severity, health beliefs and medication adherence among HIV patients. AIDS Care. 2000;12:387-398.
49. Singh N, Berman SM, Swindells S, et al. Adherence of human immunodeficiency virus-infected patients to antiretroviral therapy. Clin Infect Dis. 1999;29:824-830.
50. Bellg AJ. Maintenance of health behaviour change in preventive cardiology: internalization and self-regulation of new behaviours. Behav Modif. 2003;27:103-131.
51. Anonymous. Program increases HAART adherence in HIV patients. AIDS Alert. 1999;1:31-32.
52. DiIorio C, Resnicow K, McDonnell M, et al. Using motivational interviewing to promote adherence to antiretroviral medications: a pilot study. J Assoc Nurses AIDS Care. 2003;14:52-62.
53. Fairley CK, Levy R, Rayner CR, et al. Randomized trial of an adherence programme for clients with HIV. Int J STD AIDS. 2003;14:805-809.
54. Goujard C, Bernard N, Sohier N, et al. Impact of a patient education program on adherence to HIV medication. J Acquir Immune Defic Syndr. 2003;34:191-194.
55. Holzemer WL, Henry SB, Portillo CJ, et al. The client adherence profiling-intervention tailoring (CAP-IT) intervention for enhancing adherence to HIV/AIDS medications: a pilot study. J Assoc Nurses AIDS Care. 2000;11:36-44.
56. Lyon ME, Trexler C, Akpan-Townsend C, et al. A family group approach to increasing adherence to therapy in HIV-infected youths: results of a pilot project. AIDS Patient Care STDS. 2003;17:299-308.
57. Malow RM, McPherson S, Klimas N, et al. Alcohol and drug abuse: adherence to complex combination antiretroviral therapies by HIV-positive drug abusers. Psychiatr Serv. 1998;49:1021-1024.
58. Mann T. Effects of future writing and optimism on health behaviors in HIV-infected women. Ann Behav Med. 2001;23:26-33.
59. Molassiotis A, Lopez NV, Chung WY, et al. A pilot study of the effects of a behavioural intervention on treatment adherence in HIV-infected patients. AIDS Care. 2003;15:125-135.
60. Murphy DA, Lu MC, Martin D, et al. Results of a pilot intervention trial to improve antiretroviral adherence among HIV-positive patients. J Assoc Nurses AIDS Care. 2002;13:57-69.
61. Pradier C, Bentz L, Spire B, et al. Efficacy of an educational and counseling intervention on adherence to highly active antiretroviral therapy: French prospective controlled study. HIV Clin Trials. 2003;4:121-131.
62. Rawlings MK, Thompson MA, Farthing CF, et al. Impact of an educational program on efficacy and adherence with a twice-daily lamivudine/zidovudine/abacavir regimen in underrepresented HIV-infected patients. J Acquir Immune Defic Syndr. 2003;34:174-183.
63. Safren SA, Otto MW, Worth JL, et al. Two strategies to increase adherence to HIV antiretroviral medication: Life-steps and medication monitoring. Behav Res Ther. 2001;39:1151-1162.
64. Safren SA, Hendriksen ES, DeSousa N, et al. Use of an on-line pager system to increase adherence to antiretroviral medications. AIDS Care. 2003;15:787-793.
65. Smith SR, Rublein JC, Marcus C, et al. A medication self-management program to improve adherence to HIV therapy regimens. Patient Educ Couns. 2003;50:187-199.
66. Stenzel MS, McKenzie M, Mitty JA, et al. Enhancing adherence to HAART: a pilot program of modified directly observed therapy. AIDS Reader. 2001;11:317-328.
67. Tesoriero J, French T, Weiss L, et al. Stability of adherence to highly active antiretroviral therapy over time among clients enrolled in the treatment adherence demonstration project. J Acquir Immune Defic Syndr. 2003;33:484-493.
68. Mitty JA, McKenzie M, Stenzel M, et al. Modified directly observed therapy for treatment of human immunodeficiency virus [research letter]. JAMA. 1999;282:1334.

antiretroviral therapy/highly active antiretroviral therapy; HIV/AIDS; adherence intervention; research synthesis; meta-analysis

© 2006 Lippincott Williams & Wilkins, Inc.