JAIDS Journal of Acquired Immune Deficiency Syndromes, August 15, 2014, Volume 66
doi: 10.1097/QAI.0000000000000231
Supplement Article

Enhancing Reporting of Behavior Change Intervention Evaluations

Abraham, Charles DPhil*; Johnson, Blair T. PhD†; de Bruin, Marijn PhD‡; Luszczynska, Aleksandra PhD§,‖

Open Access

Author Information

*University of Exeter Medical School, University of Exeter, Exeter, United Kingdom;

†Center for Health, Intervention, and Prevention, University of Connecticut, Storrs, CT;

‡Health Psychology Group, Institute of Applied Health Sciences, University of Aberdeen, Aberdeen, United Kingdom;

§University of Social Sciences and Humanities, Warsaw, Poland; and

‖Trauma, Health, & Hazards Center, University of Colorado at Colorado Springs, Colorado Springs, CO.

Correspondence to: Charles Abraham, DPhil, University of Exeter Medical School, University of Exeter, St Luke's Campus, Exeter EX1 2LU, United Kingdom (e-mail: c.abraham@exeter.ac.uk).

Supported partially by the United Kingdom National Institute for Health Research (NIHR) Collaboration for Leadership in Applied Health Research and Care of the South West Peninsula (PenCLAHRC), but the views expressed in this article are those of the authors and not necessarily those of NIHR or the UK Department of Health. The work was also facilitated by United States Public Health Service grant R01-MH58563 to B.T.J.

The authors have no funding or conflicts of interest to disclose.

Supplemental digital content is available for this article. Direct URL citations appear in the printed text and are provided in the HTML and PDF versions of this article on the journal's Web site (www.jaids.com).

This is an open access article distributed under the terms of the Creative Commons Attribution-Noncommercial No Derivative 3.0 License, which permits downloading and sharing the work provided it is properly cited. The work cannot be changed in any way or used commercially.


Abstract

Abstract: Many behavior change interventions for the prevention and treatment of HIV have been evaluated, but suboptimal reporting of evaluations hinders the accumulation of evidence and the replication of interventions. In this article, we address 4 practices contributing to this problem. First, detailed descriptions of the interventions and their implementation are often unavailable. Second, the content of active control groups (such as usual care or support designed by researchers) often varies markedly between trials; yet, descriptions of this content are routinely omitted. Third, detailed process evaluations revealing the mechanisms by which interventions generate their effects, and among whom, frequently are not available. Fourth, there is a lack of replication in other contexts, which limits knowledge of external validity. This article advances recommendations made by an international group of scholars constituting the Workgroup for Intervention Development and Evaluation Research (WIDER), which has developed brief guidance to journal editors to improve the reporting of evaluations of behavior change interventions, thereby serving as an addition to reporting statements such as CONSORT. Improved reporting standards would facilitate and accelerate the development of the science of behavior change and its application in implementation science to improve public health.


INTRODUCTION

Reducing the spread of HIV and optimizing the impact of antiretroviral treatment for people living with HIV rely on the development of behavior change interventions that are efficacious and can be incorporated into everyday practices. These interventions can range from messages on unsafe sexual practices in the schools of sub-Saharan Africa to counseling for serodiscordant couples of men who have sex with men in HIV clinics in northern Europe. Practical, sustainable interventions and training programs for those who deliver interventions must first be developed and rigorously evaluated before they are disseminated. Four resolvable problems impede progress toward the development, replication, and synthesis of efficacious behavioral interventions: (1) descriptions of intervention design and implementation lack sufficient detail to allow faithful replication, (2) the widespread absence of detailed descriptions of the support provided to the active control groups (eg, “usual care”) against which intervention efficacy is assessed, (3) the paucity of detailed process evaluations capable of identifying mechanisms by which interventions generate their effects, and (4) the lack of replications in other contexts, which would reveal how context affects the success of interventions.

In this article, we address each of these problems and propose solutions, with particular emphasis on more thorough reporting requirements for intervention evaluations. In this regard, we draw upon the recommendations made by an international group of scholars constituting the Workgroup for Intervention Development and Evaluation Research (WIDER).


INACCESSIBILITY OF DETAILED BEHAVIOR CHANGE INTERVENTION MANUALS

As Table 1 summarizes, poor descriptions of the content of behavior change interventions (BCIs) result in difficulty replicating trials and explaining heterogeneity in trial results, and in failures when translating trial results to community sites. In fact, behavior change manuals describing the content and implementation of interventions in HIV prevention and management are more available than those in other behavioral domains, because of the efforts of the United States Centers for Disease Control and Prevention (CDC). Their HIV/AIDS Prevention Research Synthesis Project has conducted efficacy reviews since 1996, identifying evidence-based interventions that have “best” or “good” supporting evidence. A portion of these evidence-based interventions is packaged as part of CDC's Diffusion of Evidence-Based Interventions Project; their Web site provides treatment manuals for these interventions at no cost. Unfortunately, to date, meta-analyses have not fully used these archived manuals. Specifically, reviews and meta-analyses regularly fail to fully identify the active content of interventions (and control groups) that generates differences in measures of efficacy and effect size across efficacy trials.1

Table 1

Unfortunately, freely available treatment manuals are not the norm. Instead, researchers, practitioners, and reviewers usually must write to authors of BCI evaluations requesting manuals, including those for HIV-relevant behavioral interventions that are not provided by the Diffusion of Evidence-Based Interventions Project. Although some authors readily supply such manuals, others decline to do so for a variety of reasons, including not having prepared detailed manuals, having lost the records (sometimes just a few years after publication), and proprietary reasons, as some authors view manuals as commercial property not to be shared with rival research groups or else to be sold at considerable cost. Consequently, attempts to collect BCI manuals from published evaluations, other than those provided by the CDC, may result in very low retrieval rates. The following 4 quotations are taken from e-mails sent to 2 of the authors when collecting manuals for 2 separate reviews. They are by no means unusual, and many such requests remain unanswered.

If you want to know about our intervention, please see the book on self-efficacy by Bandura.

Find attached a PowerPoint slide. The notes below the slide offer the best description of the intervention I have.

I am sorry to say that at this point… some 20 years after data collection on this project, I do not have any additional records.

Thank you for your interest in the intervention we implemented some years back [NB. In fact, it was less than 3 years after publication of the intervention evaluation]… All of the information regarding the intervention was provided in the reference article you cited so I am not able to provide you with additional documentation.

The resultant lack of detailed manuals to facilitate replication and implementation can lead to difficulties in distinguishing (1) the efficacy of interventions (ie, preliminary effects established in a single controlled trial, delivered to a specific audience) from (2) their effectiveness (ie, interventions delivered under real-world conditions, addressing a broader or different population, with benefits proved to outweigh costs).2 Without such manuals, the effectiveness of the interventions in the “real world” cannot be established. Many published, effective interventions can no longer be faithfully replicated, and, thus, a substantial portion of the science of behavior change is being lost each year.

Imagine, by comparison, a chemist who published an article reporting an experiment in which a new and useful compound was created but who later declared that (s)he had not kept adequate laboratory notes or was not willing to share the procedures with other scientists. This practice is not acceptable in chemistry and should not be acceptable in behavioral sciences. Quite apart from the scientific obligations involved, important ethical issues arise in relation to BCIs developed using public resources that cannot later be replicated or implemented for public benefit.

Among manuals that are available, content varies greatly. Guidelines on what should be included in reports of interventions (eg, Consolidated Standards of Reporting Trials [CONSORT]3,4 and Transparent Reporting of Evaluations with Nonrandomized Designs [TREND]5) have shaped editorial policy so as to enhance and standardize the details made available in recently published trials. Yet, although CONSORT guidance calls for “precise details of the interventions intended for each group and how and when they were actually administered,”3 editorial focus is often limited to ensuring detailed descriptions of evaluation methods (eg, trial procedures) rather than of intervention implementation. As Davidson et al6 correctly noted, “Often reports fail to describe the actual behavioral intervention techniques used; instead they provide details regarding treatment format (eg, the number of sessions, type of treatment).”

Davidson et al6 helpfully augmented previous reporting guidelines by proposing that BCI evaluation reports should include details of (1) the content or elements of the intervention, (2) characteristics of those delivering the intervention, (3) characteristics of the recipients, (4) the setting (eg, worksite), (5) the mode of delivery (eg, face-to-face), (6) the intensity (eg, contact time), (7) the duration (eg, number of sessions over a given period), and (8) adherence to delivery protocols. Without exact and detailed reporting on these characteristics, reported results of even the most promising trial could never lead to work establishing that an intervention does more good than harm under real-world conditions and across populations, communities, and cultures. Further standardization of reporting of BCI content is a sine qua non for the translation of evidence into practice. This goal is particularly important and challenging in complex interventions targeting multiple outcomes, such as HIV prevention programs that simultaneously promote testing, medication adherence, and standards of care.7,8 Such standardization could greatly accelerate advancement of the science of behavior change and its applications.


FAILURE TO CONSIDER CONTENT AND SUPPORT PROVIDED TO CONTROL CONDITIONS

Control groups used in evaluations may take the form of no intervention or of active control groups that receive “usual/standard care,” a planned control intervention (eg, brief advice), or a mix of these.9 As in experimental interventions, the impact of active control group interventions on outcome measures can vary considerably between trials and, consequently, determine the effect sizes generated in a trial.10,11 Hence, interpretation, comparison, and generalizability of trial effects depend on a clear understanding of the type of control group, of any active content to which the control group was exposed, and of the intervention with which it is compared. Although CONSORT guidance states that the support provided to control groups should be reported in similar detail as the interventions,4 authors typically indicate only “received usual care,” “were given brief advice,” or the like. This practice has critical implications for understanding BCI trials.

De Bruin et al10,11 illustrated this issue clearly in a systematic review and meta-analysis of interventions to promote adherence to highly active antiretroviral therapy. Of the 34 randomized controlled trials included, only 1 study reported the standard/usual care provided to the control group in sufficient detail to allow coding of its active content. To obtain the information for the other 33 randomized controlled trials, the review authors developed a standard care checklist and sent it to the trial authors (with a 95% response rate). Some 30% of the responding authors could not complete this checklist because they did not know what support the control group received (and, thus, did not know against what the success of the intervention was compared). For the other studies, the completed checklist was coded and a quantitative score was computed (low- to high-capacity standard care). The capacity scores varied considerably among studies, which predicted considerable differences in the adherence and treatment success rates in the control groups (eg, viral suppression rates in control groups receiving optimal versus minimal standard care differed by 34 percentage points) and, consequently, in trial effect sizes.10,11
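
To make the logic of such a checklist concrete, the sketch below (in Python) shows one way a completed standard care checklist could be reduced to a single low-to-high capacity score. The item names, equal weighting, and scoring rule are hypothetical illustrations, not the instrument used by de Bruin et al.

```python
# Illustrative sketch only: item names and equal weights are hypothetical,
# not the checklist or scoring rules used by de Bruin et al.
from typing import Dict

# Hypothetical checklist: 1 if the control group received the activity, else 0.
STANDARD_CARE_ITEMS = [
    "adherence_discussed_at_visit",
    "written_information_provided",
    "pill_box_or_reminder_offered",
    "follow_up_phone_calls",
    "referral_to_adherence_counselor",
]

def standard_care_capacity(checklist: Dict[str, int]) -> float:
    """Summarize a completed checklist as a 0-1 'capacity' score
    (low- to high-capacity standard care)."""
    score = sum(checklist.get(item, 0) for item in STANDARD_CARE_ITEMS)
    return score / len(STANDARD_CARE_ITEMS)

# Example: a trial whose control arm received only brief advice plus leaflets.
trial_control = {"adherence_discussed_at_visit": 1, "written_information_provided": 1}
print(standard_care_capacity(trial_control))  # 0.4
```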

In fact, the effect sizes reported by some BCI evaluations could have been up to twice as large if those interventions had been compared with the lowest capacity usual care observed. Alternatively, some intervention effects could have been up to 4 times smaller if they were compared with the highest capacity of care observed in other trials. The implication is, clearly, that interpreting, comparing, and generalizing effects of experimental treatments when levels of control group content vary between trials requires the systematic evaluation of the support provided to both intervention and control groups. Because usual care (“control”) is rarely assessed, and even planned control group interventions (eg, brief advice) are not adequately reported, many reviews and meta-analyses of BCI evaluations fail to account for control group variability and, thus, are likely to yield less dependable findings about the relative success of BCIs and their most effective components. To identify potentially effective interventions and advance the science of behavior change, describing the content of active control interventions, such as usual care, is just as important as describing the content of interventions themselves.
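
A minimal worked example, using hypothetical viral suppression rates, illustrates why the comparator matters: the same intervention arm looks highly effective against minimal standard care and entirely ineffective against optimal standard care.

```python
# Hypothetical rates chosen only to illustrate the arithmetic; they are not
# taken from any specific trial in the review.

def risk_difference(p_intervention: float, p_control: float) -> float:
    """Absolute difference in success rates between the two arms."""
    return p_intervention - p_control

p_intervention = 0.70     # viral suppression rate in the intervention arm
p_control_minimal = 0.36  # low-capacity usual care
p_control_optimal = 0.70  # high-capacity usual care (a 34-point spread, as in the review)

print(risk_difference(p_intervention, p_control_minimal))  # 0.34 -> large apparent effect
print(risk_difference(p_intervention, p_control_optimal))  # 0.00 -> no apparent effect
```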

Making this change in trial reporting practices may also challenge intervention designers to specify more clearly how proposed experimental interventions will improve upon the best available usual care already delivered routinely. Indeed, as Table 1 (rows 1 and 2) summarizes, failure to provide detailed reports of the content of interventions and of the active control conditions with which they are compared contributes to difficulty in trial replication and explains, in part, why meta-analyses so frequently document unknown sources of heterogeneity in study results.12


PAUCITY OF PROCESS EVALUATIONS ILLUMINATING PROCESSES UNDERPINNING CHANGE

Rothman13 called for closer integration of basic and applied behavioral science, arguing that BCI designers should apply research into change mechanisms to intervention design and should use process evaluations measuring change mechanisms to enable testing of psychological theory in practice. A decade later, his call remains relevant. Many BCIs are either not evaluated at all or are not evaluated using outcome measures that are relevant to the design of services or policy.14 Even when outcome evaluations relevant to policy and practice are undertaken, they often fail to assess change in the processes or mechanisms underpinning behavior change, thereby failing to distinguish among various theoretical formulations of change processes.

A core aspect of process evaluation is testing the change mechanisms specified in BCI design.15–17 Testing change mechanisms depends on constructing a logic model early in the BCI design process. Intervention mapping (IM) identifies 6 iterative stages in BCI design and evaluation.18 First, a needs assessment determines what (if anything) needs to be changed and for whom. Second, primary and secondary intervention objectives are defined, which involves precisely specifying behavior changes that participants will be expected to make. Third, identification of underlying mechanisms that maintain current (unwanted) behavior patterns and of those that may generate specified changes leads to selection of techniques known to change those mechanisms. Fourth, having identified evidence-based behavior change techniques relevant to the intervention's behavioral objectives, practical ways of delivering these techniques are developed. Fifth, implementation planning requires anticipating how the intervention will be used or delivered in everyday contexts (for example, is the intervention acceptable, practical and sustainable?). The final stage is evaluation: Does the intervention change the specified behaviors in context? These stages are iterative in that, for example, anticipation of implementation may lead to change in design and a return from stage 5 to stage 4. Similarly, evaluation measures are anticipated when the expected behavior changes are specified in stage 2.
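
As an illustrative sketch only, a logic model of the kind produced in IM stages 2-4 can be represented as a structured record linking each behavioral objective to its targeted mechanism, change techniques, delivery method, and process measures; the entries below are hypothetical.

```python
# Hypothetical sketch of a logic model entry (IM stages 2-4, plus the process
# measures anticipated for stage 6); not a published intervention's model.
from dataclasses import dataclass, field
from typing import List

@dataclass
class LogicModelEntry:
    behavioral_objective: str       # IM stage 2: what should change, for whom
    targeted_mechanism: str         # IM stage 3: belief, skill, or other antecedent
    change_techniques: List[str]    # IM stage 3/4: techniques expected to alter it
    delivery_method: str            # IM stage 4: how the techniques are delivered
    process_measures: List[str] = field(default_factory=list)  # anticipated for stage 6

logic_model = [
    LogicModelEntry(
        behavioral_objective="Consistent condom use with new partners",
        targeted_mechanism="Perceived peer norms about condom use",
        change_techniques=["provide normative information", "prompt peer discussion"],
        delivery_method="facilitated group session",
        process_measures=["normative belief scale at pre- and posttest"],
    ),
]
```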

Unfortunately, many BCI designs do not specify targeted change processes (IM stage 3) and, consequently, their evaluations (IM stage 6) do not include measures of the mechanisms targeted to change behavior. For example, in 1 review,19 only 42% of 107 intervention evaluations measured a theoretically specified behavioral antecedent, such as a belief, motivation, or skill (including self-regulatory skills), that changed significantly from pretest to posttest. Fewer than 4% showed that a reported effect of the intervention on outcomes was mediated by changes in specified mechanisms.19 Without specification of mechanisms of change in the logic (or mechanism or program) model (IM stage 3) and measurement of those mechanisms in the evaluation, even an impeccably designed and conducted outcome evaluation cannot inform researchers about how the intervention worked or failed to work (Table 1, row 3). For example, if an intervention is intended to change motivation by changing participants' beliefs about what others are doing or about what their peers approve (eg, “Do other people of your age and gender use condoms and approve of their use?”), then it is critical to know if such beliefs changed relative to a no-intervention control group.
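
The following simulated sketch illustrates the kind of mediation test implied here, using a regression-based (product-of-coefficients) approach in the moderator-mediator framework21; all variable names and effect sizes are hypothetical, and the sketch assumes the statsmodels package is available.

```python
# Minimal simulated sketch of a mediation test:
# intervention -> change in targeted belief (mechanism) -> behavior.
# Effect sizes below are invented for illustration only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
treat = rng.integers(0, 2, n)                               # 1 = intervention, 0 = control
belief = 0.5 * treat + rng.normal(size=n)                   # path a: intervention changes belief
behavior = 0.4 * belief + 0.1 * treat + rng.normal(size=n)  # path b plus a direct effect

a = sm.OLS(belief, sm.add_constant(treat)).fit().params[1]
model_b = sm.OLS(behavior, sm.add_constant(np.column_stack([treat, belief]))).fit()
b = model_b.params[2]

print(f"indirect (mediated) effect a*b = {a * b:.2f}")
print(f"direct effect of intervention  = {model_b.params[1]:.2f}")
```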

This knowledge is important regardless of the effectiveness of the intervention. For example, in the case of a failed intervention, we need to know whether it was ineffective because the change techniques and delivery methods chosen failed to alter the targeted regulatory processes, which represents a failure of intervention design and implementation. Alternatively, was the intervention ineffective because, while the mechanisms specified in the logic model changed, those changes did not lead to the theorized behavior change? Such an instance represents a failure to adequately theorize the mechanisms responsible for the targeted behavior pattern.

In summary, to make valid conclusions about the effects of an intervention, 2 types of measures should be used: (1) measures of changes in behavior, and (2) measures of changes in antecedents of behavior capable of explaining behavior change (such as changes in skills, beliefs, or knowledge). Ideally, change mechanisms would be specified in intervention logic models, and process evaluations would assess whether these mechanisms changed. When this assessment is combined with outcome evaluations clarifying whether the interventions changed behavior, data synthesis can highlight which theorized mechanisms are relevant to particular behavior change domains—or across domains.

For example, in a meta-analysis of BCIs for HIV prevention, Albarracín et al20 found that the most successful interventions to increase condom use provided (1) information, (2) arguments to promote positive attitudes toward condom use, (3) behavioral skills relevant to condom use, and (4) self-regulatory skills training. By contrast, inclusion of threat or fear appeals failed to improve condom use. Albarracín et al also found that some approaches worked with one target group but not another. For example, arguments addressing normative beliefs were more successful when the audience was less than 21 years of age than among older audiences. In this case, age moderated the relationship between inclusion of normative arguments and intervention efficacy.21

Similarly, applying the cross-domain taxonomy of change techniques developed by Abraham and Michie,22 Michie et al23 found that interventions prompting self-monitoring, combined with other techniques designed to promote goal setting and enhance self-regulatory skills (derived from Carver and Scheier's control theory),24 were associated with healthier eating and increased physical activity compared with interventions that omitted these elements. Represented as a standardized mean difference (SMD), interventions that included these specific change techniques were more successful (0.42) than those that did not include them (0.26). Likewise, Webb and Sheeran25 found that interventions including information provision, goal setting, modeling, and skill training yielded small to medium effects on behavior change (with SMDs close to 0.3), whereas interventions including use of contingent rewards and provision of social support had larger effects (SMDs between 0.5 and 0.6).
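
For readers unfamiliar with the metric, the sketch below shows the usual computation of an SMD (Cohen's d with a pooled standard deviation); the group means, SDs, and sample sizes are hypothetical and chosen only to yield an effect of roughly the size discussed here.

```python
# Standardized mean difference (Cohen's d, pooled SD); inputs are hypothetical.
import math

def smd(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardized mean difference between treatment and control groups."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# eg, a hypothetical physical-activity outcome (sessions per week)
print(round(smd(3.2, 2.6, 1.5, 1.4, 120, 118), 2))  # ~0.41
```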

A final example helps to highlight the point that mechanisms of change are not necessarily dependent on theory-driven BCI content. Lennon et al26 found that BCIs for HIV prevention in samples of women that more successfully reduced depression also were more successful in reducing HIV risk. Indeed, on average, BCIs that failed to reduce depression did not reduce HIV risk at all. These trials almost never explicitly addressed depression as an intervention component; yet, depression routinely improved. Identifying and coding BCI content from intervention descriptions did not explain this link between BCI effects on depression and HIV risk.

In summary, precise specification of change techniques designed to alter identified regulatory mechanisms in logic models of intervention and active control content would allow greater precision in identifying what works, for whom, and by what change processes. Doing so would, in turn, greatly accelerate the development of a science of behavior change (Table 1, row 3). Of course, assessment of mechanism is not the only purpose of process evaluations. Another key purpose is identifying the contextual factors that may influence intervention success or failure. Measuring differences in contexts and designing trials that systematically vary contextual factors is critical to developing an understanding of the generalizability of effective BCIs.


LACK OF REPLICATIONS IN DIVERSE CONTEXTS

Replication is critical to the advancement of science, including behavioral science. Replications confirming previous findings strengthen our faith in the reality and generalizability of an effect. Failures to replicate highlight the fragility or context-specific nature of an effect or may refute models of the mechanism in question.27,28 BCIs may be effective in one context but not another (Table 1, row 4). For example, several meta-analyses have demonstrated that, on average, BCIs to reduce risk of HIV achieve greater success in trials conducted in poorer nations than in richer nations.29–31 Similarly, whereas sending survey questionnaires (without persuasive messages) to blood donors in Canada resulted in substantial increases in blood donations,32 attempts to replicate this effect in a different cultural context, in the Netherlands, failed to yield additional donations.33

Accurate replication and, therefore, implementation, depend on full disclosure of BCI design, including logic models, change techniques, and delivery methods employed as well as full details of evaluation processes.27 If readers are unable to obtain sufficient information to replicate a BCI accurately, the reported design and evaluation work is lost to science. Readers are left knowing that something worked—or did not work—but not exactly what, or how, or for whom.

Besides obtaining evidence for the efficacy of an intervention in the original trials (or even obtaining robust evidence for the effectiveness of an intervention under real-world conditions), transparent reporting of intervention protocols is essential for progress in translational and implementation research.2 Absence of a clear protocol or manual to guide implementation is a critical flaw in intervention reporting; it may well lead to problems and limitations in effective implementation: evaluations of costs become difficult, and practitioners and other stakeholders may be unable to assess the clinical meaningfulness and novelty of an intervention. Moreover, under such circumstances, it is difficult or impossible to plan for adequate human resources (such as skills, time, and integration with other practices) and environmental resources (such as setting characteristics, financial costs, and the use of existing community resources).

In addition, transferability across populations and contexts depends strongly on protocol/manual quality. Dimensions that make interventions more implementable include clear and simple language, accurate and appealing graphical elements, high visibility and vivid presentation, reproducibility and flexibility of techniques, compatibility with other components of practice, usability, and compatibility with patients' and practitioners' preferences.34 Without detailed intervention manuals, it is not possible to evaluate how well an intervention has been implemented and, hence, difficult to assess the generalizability of effects, which precludes evidence-based guidance to stakeholders on adoption of the intervention and assessment of its effectiveness across contexts.


CONCLUSIONS

Application of the WIDER Recommendations

Following the annual meeting of the European Society of Health Psychology held in 2007 in Maastricht, the Netherlands, 32 international scholars, including editors of 11 health-related peer-reviewed journals, issued a consensus statement entitled the “Workgroup for Intervention Development and Evaluation Research (WIDER) Recommendations” (see Supplemental Digital Content, http://links.lww.com/QAI/A543). The summary statement of that consensus appears in Table 2.35

Table 2

The WIDER recommendations address 4 issues:

1. Editors of scientific journals should ensure that BCI evaluations comply fully with the extended CONSORT statements for reporting of trials of nonpharmacological treatments3 by providing standardized descriptions of intervention characteristics that allow accurate replication across contexts.

2. This goal should be supported by clarification of the following: (1) the change processes considered necessary to prompt a change in the specified behavior(s), (2) how the intervention design was informed by theoretical considerations or models of causal or regulatory processes, and (3) what mechanism-based change techniques were included. The last element is particularly important because such techniques constitute the unique (and potentially active) ingredients of a BCI.

3. Even when BCI evaluation reporting meets the standards outlined above, detailed information about materials and implementation cannot typically be included in the limited space available in scientific journals. This information must be included in protocols or manuals describing intervention implementation. Unfortunately, as noted above, such manuals often are not available after BCI evaluations are published. Consequently, WIDER recommended that detailed BCI manuals, including in-context implementation procedures, be published (eg, on a journal Web site) at the same time as BCI evaluation reports. Some journals have already adopted this practice (eg, Addiction).36

4. Finally, as highlighted above, WIDER recommended that such manuals also include the details of any services or care provided to control groups (such as usual care) in BCI evaluations.

Drawing on the WIDER consensus statement, Albrecht et al37 developed a checklist to assess the extent to which BCI evaluation reports met the reporting criteria specified by WIDER. This tool could prove useful in guiding reviewers of BCI evaluation submissions. Supplementing this tool, Hoffmann et al38 developed the Template for Intervention Description and Replication (TIDieR) checklist and guide, which include more detailed reporting of intervention content. To the extent that subsequent reports of interventions report these dimensions, accounts of interventions are likely to improve, reviewers and editors will be better able to evaluate the descriptions, and readers should be better able to use the information.
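
As a purely hypothetical sketch, the 4 reporting issues discussed in this article could be operationalized in an editor's or reviewer's workflow as a minimal structured checklist of the following kind; the item wording paraphrases this article and is not the Albrecht et al37 checklist or the TIDieR38 instrument.

```python
# Hypothetical reviewer checklist derived from the 4 issues discussed in this
# article; item names are illustrative and not taken from any published tool.
WIDER_CHECK = {
    "intervention_described_for_replication": False,   # content, deliverers, setting, dose
    "change_mechanisms_and_techniques_specified": False,
    "manual_or_protocol_published_with_report": False,
    "control_group_content_described": False,
}

def report_gaps(check: dict) -> list:
    """Return the reporting criteria a submission has not yet met."""
    return [item for item, met in check.items() if not met]

print(report_gaps(WIDER_CHECK))
```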

Adoption of the procedures described above, including the 4 WIDER recommendations, by journal editors will enhance access to BCI evaluation research, facilitate accurate replication, improve transferability across contexts and implementation, and, thereby, increase the impact of such research on individual and public health, as exemplified in contemporary implementation science.39 Doing so complements current CONSORT guidance as well as guidance on the reporting of meta-analyses (eg, Preferred Reporting Items for Systematic Reviews and Meta-Analyses, PRISMA, http://www.prisma-statement.org/) and observational studies (eg, STrengthening the Reporting of OBservational studies in Epidemiology, STROBE, http://www.strobe-statement.org/). In the absence of improved standards of reporting addressing the 4 issues highlighted in this article, many opportunities for improving and sustaining behavior change interventions will continue to be lost to science and practice.


REFERENCES

1. Johnson BT, Michie S, Snyder LB. Effects of behavioral intervention content on HIV prevention outcomes: a meta-review of meta-analyses. J Acquir Immune Defic Syndr. 2014;66(suppl 3):S259–S270.

2. Glasgow RE, Emmons KM. How can we increase translation of research into practice? Types of evidence needed. Annu Rev Public Health. 2007;28:413–433.

3. Moher D, Schulz KF, Altman DG. The CONSORT statement: revised recommendations for improving the quality of reports of parallel group randomized trials. Ann Intern Med. 2001;134:657–662.

4. Boutron I, Moher D, Altman DG, et al. Extending the CONSORT statement to randomized trials of nonpharmacologic treatment: explanation and elaboration. Ann Intern Med. 2008;148:295–309.

5. Des Jarlais DC, Lyles C, Crepaz N. Improving the reporting quality of nonrandomized evaluations of behavioral and public health interventions: the TREND statement. Am J Public Health. 2004;94:361–366.

6. Davidson KW, Goldstein M, Kaplan RM, et al. Evidence-based behavioral medicine: what is it and how do we achieve it? Ann Behav Med. 2003;26:161–171.

7. Hayes R, Sabapathy K, Fidler S. Universal testing and treatment as an HIV prevention strategy: research questions and methods. Curr HIV Res. 2011;9:429–445.

8. Cori A, Ayles H, Beyers N, et al. HPTN 071 (PopART): a cluster-randomized trial of the population impact of an HIV combination prevention intervention including universal testing and treatment: mathematical model. PLoS One. 2014;9:e84511.

9. Freedland KE, Mohr DC, Davidson KW. Usual and unusual care: existing practice control groups in randomized controlled trials of behavioral interventions. Psychosom Med. 2011;73:323–335.

10. de Bruin M, Viechtbauer W, Hospers HJ, et al. Variability in standard care quality of HAART-adherence studies: implications for the interpretation and comparison of intervention effects. Health Psychol. 2009;28:668–674.

11. de Bruin M, Viechtbauer W, Schaalma HP, et al. Standard care impact on effects of highly active antiretroviral therapy adherence interventions: meta-analysis of randomized controlled trials. Arch Intern Med. 2010;170:240–250.

12. Johnson BT, Scott-Sheldon LAJ, Carey MP. Meta-synthesis of health behavior change meta-analyses. Am J Public Health. 2010;100:2193–2198.

13. Rothman AJ. Is there nothing more practical than a good theory? Why innovations and advances in health behavior change will arise if interventions are used to test and refine theory. Int J Behav Nutr Phys Act. 2004;1:11. Available at: http://www.ijbnpa.org/content/1/1/11. Accessed April 19, 2014.

14. House of Lords, Science and Technology Committee. Behaviour Change. London, United Kingdom: Her Majesty's Stationery Office; 2011. Available at: http://www.publications.parliament.uk/pa/ld201012/ldselect/ldsctech/179/17906.htm#a10. Accessed April 19, 2014.

15. Craig P, Dieppe P, Macintyre S, et al. Developing and evaluating complex interventions: the new Medical Research Council guidance. Br Med J. 2008;337:a1655.

16. Grant A, Treweek S, Dreischulte T, et al. Process evaluations for cluster-randomised trials of complex interventions: a proposed framework for design and reporting. Trials. 2013;14.

17. Moore G, Audrey S, Baker M, et al. Process evaluation in complex public health intervention studies: the need for guidance. J Epidemiol Community Health. 2014;68:101–102.

18. Bartholomew LK, Parcel GS, Kok G, et al. Planning Health Promotion Programs: An Intervention Mapping Approach. 3rd ed. San Francisco, CA: Jossey-Bass; 2011.

19. Prestwich A, Sniehotta FF, Whittington C, et al. Does theory influence the effectiveness of health behavior interventions? Meta-analysis. Health Psychol. 2014;33:465–474.

20. Albarracín D, Gillette CJ, Earl AN, et al. A test of major assumptions about behavior change: a comprehensive look at the effects of passive and active HIV prevention interventions since the beginning of the epidemic. Psychol Bull. 2005;131:856–897.

21. Baron RM, Kenny DA. The moderator-mediator variable distinction in social psychological research: conceptual, strategic and statistical considerations. J Pers Soc Psychol. 1986;51:1173–1182.

22. Abraham C, Michie S. A taxonomy of behavior change techniques used in interventions. Health Psychol. 2008;27:379–387.

23. Michie S, Abraham C, Whittington C, et al. Identifying effective techniques in interventions: a meta-analysis and meta-regression. Health Psychol. 2009;28:690–701.

24. Carver CS, Scheier MF. Control theory: a useful conceptual framework for personality-social, clinical and health psychology. Psychol Bull. 1982;92:111–135.

25. Webb TL, Sheeran P. Does changing behavioral intentions engender behavior change? A meta-analysis of the experimental evidence. Psychol Bull. 2006;132:249–268.

26. Lennon CA, Huedo-Medina TB, Gerwien DP, et al. A role for depression in sexual risk reduction for women? A meta-analysis of HIV prevention trials with depression outcomes. Soc Sci Med. 2012;75:688–698.

27. Peters G-JY, Abraham C, Crutzen R. Full disclosure: doing behavioral science necessitates sharing. Eur Health Psychol. 2012;14:77–84.

28. Ritchie SJ, Wiseman R, French CC. Failing the future: three unsuccessful attempts to replicate Bem's “retroactive facilitation of recall” effect. PLoS One. 2012;7:e33423.

29. Huedo-Medina TB, Boynton MH, Warren MR, et al. Efficacy of HIV prevention interventions in Latin American and Caribbean nations, 1995–2008: a meta-analysis. AIDS Behav. 2010;14:1237–1251.

30. LaCroix JM, Snyder LB, Huedo-Medina TB, et al. Effectiveness of mass media interventions for HIV prevention, 1986–2013: a meta-analysis. J Acquir Immune Defic Syndr. 2014;66(suppl 3):S329–S340.

31. Tan JY, Huedo-Medina TB, Warren MR, et al. A meta-analysis of the efficacy of HIV/AIDS prevention interventions in Asia, 1995–2009. Soc Sci Med. 2012;75:676–687.

32. Godin G, Sheeran P, Conner M, et al. Asking questions changes behavior: mere measurement effects on frequency of blood donation. Health Psychol. 2008;27:179–184.

33. van Dongen A, Abraham C, Ruiter R, et al. Does questionnaire distribution promote blood donation? An investigation of question-behavior effects. Ann Behav Med. 2013;45:163–172.

34. Kastner M, Makarski J, Hayden L, et al. Making sense of complex data: a mapping process for analyzing findings of a realist review on guideline implementability. BMC Med Res Methodol. 2013;13:112.

35. Abraham C. Designing and evaluating interventions to change health-related behavior patterns. In: Boutron I, Ravaud P, Moher D, eds. Randomized Clinical Trials of Nonpharmacologic Treatments. Boca Raton, FL: Chapman & Hall; 2012:357–368.

36. West R. Providing full manuals and intervention descriptions: addiction policy. Addiction. 2008;103:1411.

37. Albrecht L, Archibald M, Arseneau D, et al. Development of a checklist to assess the quality of reporting of knowledge translation interventions using the Workgroup for Intervention Development and Evaluation Research (WIDER) recommendations. Implement Sci. 2013;8:52.

38. Hoffmann TC, Glasziou PP, Boutron I, et al. Better reporting of interventions: template for intervention description and replication (TIDieR) checklist and guide. Br Med J. 2014;348.

39. Padian NS, Holmes CB, McCoy SI, et al. Implementation science for the US President's Emergency Plan for AIDS Relief (PEPFAR). J Acquir Immune Defic Syndr. 2011;56:199–203.

Keywords:

behavior change; evaluation; process evaluation; reporting standards; randomized controlled trials



© 2014 by Lippincott Williams & Wilkins
