AIDS:
doi: 10.1097/QAD.0000000000000111
Supplement Articles

The role of mathematical modelling in the development of recommendations in the 2013 WHO consolidated antiretroviral therapy guidelines

Easterbrook, Philippa J.; Doherty, Meg C.; Perriëns, Joseph H.; Barcarolo, Jhoney L.; Hirnschall, Gottfried O.


Author Information

HIV Department, World Health Organization, Avenue Appia 20, 1211 Geneva 27, Switzerland.

Correspondence to Philippa J. Easterbrook, Geneva, Switzerland. E-mail: easterbrookp@who.int

Received 7 October, 2013

Revised 7 October, 2013

Accepted 7 October, 2013


Abstract

Despite the exponential growth in the literature on modelling and simulation studies of impact and cost-effectiveness in different aspects of healthcare, there is no clear consensus on the appropriate role of modelling in the development of recommendations in clinical guidelines. This is compounded both by the lack of a standardized approach to assessing the quality of modelling and by a lack of clarity on its positioning within the GRADE (Grading of Recommendations, Assessment, Development, and Evaluation) method for decision-making in the development of WHO guidelines, which considers evidence from systematic reviews of randomized controlled trials (RCTs) and observational studies together with stakeholder values and preferences, resource use, and feasibility. In the development of the 2013 WHO Consolidated Guidelines on the use of Antiretroviral drugs for treating and preventing HIV infection, a series of modelling projects were undertaken to inform the recommendations on eligibility criteria for ART initiation and on approaches to monitoring for treatment response. We report our experiences, challenges encountered, and several key considerations to guide the future use of modelling in the development of WHO guidelines. These are: (1) transparency in the conduct and reporting of model inputs and results; (2) the need for agreed standards for critical appraisal and use of modelling data in healthcare policy-making; (3) recognition that modelling of cost-effectiveness is only one component of decision-making in the development of WHO recommendations and in priority-setting; (4) the need for closer interaction and an ongoing dialogue between modellers and model end-users or decision-makers; (5) the important role of WHO in convening and facilitating comparative assessment of multiple models; and (6) the need to optimize research and data collection to inform modelling studies.

Introduction

One of WHO's core functions is to assess new evidence and innovations, and translate them into guidance that can inform and guide country decisions. WHO published its first antiretroviral therapy (ART) treatment guidelines more than a decade ago, with subsequent revisions culminating in updated guidelines in 2010, for use of ART in adults and adolescents [1], children [2], and pregnant women [3]. The key 2010 recommendation was an increase in the CD4+ cell count threshold for ART initiation from below 200 to below 350 cells/μl. Since then, important new evidence has emerged on the impact of earlier ART initiation on reducing HIV transmission, as well as on individual patient benefits in terms of reduced morbidity and mortality [4–6]. In addition, new HIV testing and counseling strategies and technologies have expanded opportunities for decentralizing HIV testing approaches [7,8], while more accessible viral load testing technologies for monitoring allow earlier identification of ART failure, and more timely switching of regimens.

The updated 2013 consolidated guidelines were developed in response to the need to reflect these advances [9,10]. Key issues addressed were whether to further increase the CD4+ cell count threshold for ART initiation to less than 500 cells/μl in HIV-infected adults, whether to initiate ART regardless of CD4+ cell count in specific subpopulations [pregnant women, serodiscordant couples, and hepatitis B virus (HBV) and/or hepatitis C virus (HCV) coinfected persons], and the optimal approaches to monitoring patients on ART. The guidelines are based on a public health approach to scaling up antiretrovirals (ARVs) for treatment and prevention that seeks to maximize individual and population-level benefit, and their primary target audience is national HIV programme managers in low-income and middle-income settings. Therefore, in addition to specific clinical recommendations, the 2013 guidelines also provide guidance to countries in planning and setting priorities from a menu of policy options for ART use in different population groups, epidemiological settings, and levels of ART uptake [9,10].


The WHO antiretroviral guidelines development process and Grading of Recommendations, Assessment, Development and Evaluation methodology

The review of evidence and development of recommendations in the 2013 guidelines followed the Grading of Recommendations, Assessment, Development and Evaluation (GRADE) system, which emphasizes a structured, explicit and transparent approach to the grading and formulation of recommendations, as well as a rating of the quality of evidence and strength of recommendations [11–14]. The GRADE system classifies the quality of evidence into one of four levels: high, moderate, low and very low (Table 1). The rating of quality of evidence based on randomized controlled trials (RCTs) starts as high, but may be decreased for several reasons, including risk of bias, inconsistency of results, indirectness of evidence, imprecision and publication bias [11–13]. The rating of evidence based on observational studies starts as low, but may be increased if the magnitude of the treatment effect is very large, if there is evidence of a dose–response relationship, or if residual plausible biases would underestimate the effect size [14]. The GRADE system also classifies the strength of recommendations as either 'strong' or 'conditional'. The nature and strength of recommendations are determined not only by the quality of the evidence but also by an assessment of acceptability to relevant stakeholders, resource implications and the feasibility of implementation in multiple settings.
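To make this start-and-adjust logic concrete, the short sketch below (in Python) is a purely illustrative toy encoding of the rules just described; it is not an official GRADE or WHO tool, and real GRADE assessments involve structured judgement rather than arithmetic.

```python
# Illustrative sketch only: a toy encoding of the GRADE start-and-adjust logic
# described above. It is not an official GRADE or WHO tool.

LEVELS = ["very low", "low", "moderate", "high"]

def grade_quality(study_design, downgrades=0, upgrades=0):
    """Return a GRADE quality level for a body of evidence.

    study_design : "rct" or "observational"
    downgrades   : levels removed for risk of bias, inconsistency, indirectness,
                   imprecision or publication bias
    upgrades     : levels added (in GRADE, upgrading applies to observational
                   evidence) for a very large effect, a dose-response gradient,
                   or residual plausible bias that would underestimate the effect
    """
    start = LEVELS.index("high") if study_design == "rct" else LEVELS.index("low")
    score = start - downgrades + upgrades
    return LEVELS[max(0, min(score, len(LEVELS) - 1))]

# Example: RCT evidence downgraded twice, for indirectness and imprecision.
print(grade_quality("rct", downgrades=2))            # -> "low"
# Example: cohort simulation evidence downgraded once for indirectness,
# as in the drug-resistant tuberculosis guideline discussed later.
print(grade_quality("observational", downgrades=1))  # -> "very low"
```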

Table 1

Systematic reviews were commissioned and undertaken on around 40 topics relating to different aspects of HIV care to determine, among other issues, whether changes to existing recommendations on ART eligibility or to patient monitoring strategies were warranted. For the majority of these systematic reviews, a standardized GRADE evidence table was used to present quantitative summaries of the evidence and an assessment of its quality. In addition to the scientific literature, other inputs to the decision-making process for the development of specific recommendations included: the expected individual and population-level benefits and harms of potential recommendations; community values and preferences (informed by an e-survey consultation); feasibility and constraints to implementation (informed by commissioned reports on country experience on selected topics); equity, ethics and human rights implications; and resource use, population-level health impact and cost–effectiveness [informed by the available literature and by results from mathematical modelling of different HIV testing and ART initiation scenarios and patient monitoring strategies in various epidemic settings, undertaken by the International HIV Modelling Consortium (www.hivmodelling.org)].


The use of modelling in the development of guidelines

The optimal evidence to inform the development of recommendations and guidelines on specific treatment or preventive interventions generally comes from large, high-quality RCTs, or systematic reviews of RCTs, addressing the question of interest, with formal rating of the quality of this evidence using GRADE or a similar system. However, there are limitations to reliance on this evidence source alone. First, for many questions, there may be no RCTs or prospective studies, or, if they do exist, data are based on a highly selected (usually adult) population from high-income settings with adequate resources and highly trained staff, and often with limited follow-up. Results from these existing studies may therefore not be generalizable beyond their own contexts. In addition, policy makers need to consider the population-level impact over a much longer time horizon than is possible with RCTs. Finally, there may be many other permutations of an intervention (dose, frequency, and duration) or outcomes that are not adequately addressed by current research. Such 'indirectness' in evidence relating to population, intervention or outcomes is usually addressed by downgrading the GRADE quality of evidence rating by one or two levels (Table 1) [12,13], with an explicit comment on the limitations of the evidence.

To address these more complex questions, and where it may not be possible to extrapolate from existing studies, other forms of research synthesis such as mathematical modelling are increasingly used to estimate impact and cost–effectiveness, and to project long-term policy consequences. Model-based methods may also be useful in assessing the comparative effectiveness and cost–effectiveness of a much broader range of interventions (or combinations of interventions), and over a wider range of timescales, settings and populations, than is usually possible with RCTs or even observational studies; they have been used increasingly in the evaluation of health programmes, and of HIV care in particular [15–17]. There are many limitations in the use of models, but the most critical relate to the uncertainty of the data and the assumptions they employ, both of which affect the internal validity and the external validity (generalizability) of their conclusions. Model simulation studies must always be carefully evaluated in light of these and other limitations.

Despite the exponential growth in the literature on modelling and simulation studies of impact and cost–effectiveness in different aspects of healthcare, there is no clear consensus on the appropriate positioning and contribution of modelling in guidelines development. Perspectives range from those who believe that clinical guidance should be based solely on empirical and observational evidence, to those who seek to incorporate modelling into the decision-making process of developing recommendations. However, the lack of a standardized methodology for the use of modelling, either alone or as an extension of systematic reviews, has limited its use to date in the development of clinical guidelines.

There are a few recent examples, outside WHO, in which model simulation data have been used by the Agency for Healthcare Research and Quality (AHRQ) Evidence-based Practice Center (EPC) programme alongside systematic reviews [18–20], and by the U.S. Preventive Services Task Force (USPSTF) in the development of updated cancer screening guidelines, specifically for colorectal, breast and cervical cancer [21]. In a recent review of 10 EPC reports that included modelling, seven used models to augment systematic review results and three used them as the primary data source [19]. The main contributions of the modelling were considered to be in addressing evidence gaps, extending the evaluation of benefits and harms beyond intermediate outcomes, and offering comparisons of additional strategies. Other examples of the influence of cost–effectiveness modelling include the development of vaccination policy in high-income settings [21], and the adoption of the Archimedes Model (a clinically detailed, single integrated simulation model covering coronary artery disease, diabetes and its complications, congestive heart failure, stroke, hypertension, and colorectal, breast and lung cancers) by the US Department of Health and Human Services for the rapid analysis and evaluation of the cost–effectiveness of specific healthcare interventions [22,23].

The main experience at WHO with the use of cost–effectiveness models in guidelines development has been in work led by the Initiative for Vaccine Research in the department of Immunization, Vaccines and Biologicals to support decision-making on the introduction of pneumococcal, rotavirus and human papillomavirus vaccines [24–27]. Following a series of consultations in 2009 and 2010 with multiple modelling groups, a consensus report was published to guide future activities on the use of cost–effectiveness analyses for vaccines and immunizations [28–30]. More recently, in light of an ongoing pivotal phase 3 trial of a first-generation malaria vaccine, RTS,S/AS01, WHO is evaluating different modelling approaches to inform decisions about whether or not to introduce RTS,S/AS01 as an additional malaria control measure in combination with other interventions [31]. Other modelling work has also informed the development of WHO guidelines on hand hygiene, various screen-and-treat strategies for the prevention of cervical cancer, and control measures during the severe acute respiratory syndrome (SARS) outbreak.

One explicit example in which the GRADE method was used to assess the quality of evidence from simulation studies in support of a WHO guideline recommendation was the 2011 guidelines on the programmatic management of drug-resistant tuberculosis [32]. A conditional recommendation was made for the use of rapid drug susceptibility testing of isoniazid and rifampicin at the time of tuberculosis diagnosis, based on findings from a cohort simulation model. The quality of evidence was downgraded to very low due to indirectness, and additional caveats were highlighted in the accompanying commentary. The GRADE method has also been used to classify the quality of primary evidence used in population pharmacokinetic modelling to support recommendations relating to drug interactions.


The use of modelling in the 2013 WHO antiretroviral consolidated guidelines

Two key modelling projects were commissioned by the WHO HIV department and undertaken by the HIV Modelling Consortium (www.hivmodelling.org) to support the 2013 ARV guidelines. The first used multiple independent models in four low-income and middle-income settings (South Africa, Zambia, India and Vietnam), representative of different HIV epidemic types (generalized, concentrated and mixed) and levels of ART coverage, to examine the health impact [in disability-adjusted life-years (DALYs)], cost and cost–effectiveness of different eligibility criteria for ART initiation and testing strategies (assuming current as well as expanded patterns of HIV testing and linkage to care) in different populations (for example adults, pregnant women, and HIV serodiscordant couples) [33]. A second project used three independent models to examine different strategies for monitoring treatment response (clinical, CD4+ cell count and viral load) in patients on ART, as well as different monitoring frequencies and different criteria for switching to second-line ART [34]. An additional project examined the relative benefits of providing earlier or immediate ART to HBV-HIV or HCV-HIV coinfected adults compared with HIV monoinfected adults [35]. A key feature of these analyses was their use of multiple existing, independently developed models of HIV infection, transmission and the impact of ART to compare different scenarios for which there were limited or no available data in the literature.

Additional modelling work was undertaken by two other groups to support evidence from systematic reviews in the development of recommendations on when and what ART to start in children: a causal modelling analysis of the impact of starting ART at different ages in children, based on data from the IeDEA Southern Africa collaborative dataset [36]; and a Monte Carlo computer simulation model of paediatric HIV disease to examine the impact and cost–effectiveness of initiating different ARV regimens [protease inhibitor-based, nonnucleoside reverse transcriptase inhibitor (NNRTI)-based, and induction/maintenance] in children (personal communication, Andrea Ciaranello 2013).

The primary contribution of the modelling outputs to the 2013 consolidated guidelines was in identifying optimal implementation strategies for the key clinical recommendations within the framework of the chapter Guidance for Programme Managers. For example, by highlighting the relative cost–effectiveness of different ARV eligibility criteria in combination with different strategies to expand testing and access, in different populations and epidemic settings, the models informed programmatic decisions as to whether expanding access to HIV care through testing and linkage to care should precede expansion of ART eligibility criteria.

A secondary role was consideration of the cost–effectiveness modelling data in the development of clinical recommendations (alongside evidence from systematic reviews of RCTs and observational studies, community values and preferences, equity and human rights implications, feasibility and constraints to implementation) using the GRADE decision-making framework.

The process involved a series of consultative meetings between WHO and representatives of the HIV modelling consortium to formulate the key questions, comparators and outcomes, and a key consortium meeting in November 2012, to discuss initial analyses and their interpretation. Results were presented to the WHO ARV clinical guidelines development groups (adults and maternal and child health) in December 2012, and the more detailed conclusions at the WHO Programmatic guidance meeting in January 2013.

As with a previous modelling consortium initiative [37], some limitations of the models used for the 2013 ARV guidelines process were recognized that will need to be addressed in future modelling endeavours. These include the need to consider: the impact of ART in the context of both other key preventive interventions, such as male circumcision, and other competing priorities (e.g. investing instead in transitioning all patients off the more toxic stavudine-containing ART regimens, or strengthening health systems to ensure a continuous ART drug supply at peripheral health centres); models relevant to more settings (e.g. high/low prevalence, high/low ART coverage, concentrated epidemics and different key populations); evaluation of the impact of varying 'real-world' inputs, such as suboptimal adherence, levels of ART coverage, delays in getting patient results to clinicians and in switching ART, as well as fuller consideration of the impact of different monitoring strategies on the potential for emergence of drug-resistant virus and on HIV transmission; evaluation of different cost–effectiveness thresholds (rather than just comparing cost–effectiveness ratios to the common GDP benchmarks adopted by WHO); and the impact of decisions over both short- and long-term horizons.
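For readers less familiar with how such threshold comparisons enter the analyses, the following minimal sketch (in Python, with entirely hypothetical programme totals) shows the basic arithmetic: an incremental cost–effectiveness ratio (ICER) for an expanded-eligibility strategy is computed relative to current practice and compared against a range of willingness-to-pay thresholds, including GDP-based benchmarks, rather than a single cut-off. The models cited above [33,34] use far richer cost and outcome structures; the point here is only the logic of testing one ratio against several thresholds.

```python
# Minimal sketch with hypothetical numbers: computing an incremental
# cost-effectiveness ratio (ICER) and testing it against several
# willingness-to-pay thresholds rather than a single GDP benchmark.

def icer(cost_new, cost_old, dalys_averted_new, dalys_averted_old):
    """Incremental cost per additional DALY averted."""
    delta_cost = cost_new - cost_old
    delta_effect = dalys_averted_new - dalys_averted_old
    if delta_effect <= 0:
        raise ValueError("New strategy averts no additional DALYs")
    return delta_cost / delta_effect

# Hypothetical 20-year programme totals (US$ spent and DALYs averted).
current_eligibility  = {"cost": 1.00e9, "dalys_averted": 1.50e6}
expanded_eligibility = {"cost": 1.35e9, "dalys_averted": 1.95e6}

ratio = icer(expanded_eligibility["cost"], current_eligibility["cost"],
             expanded_eligibility["dalys_averted"], current_eligibility["dalys_averted"])

# Evaluate against a range of thresholds (US$ per DALY averted), not just
# the 1x and 3x GDP-per-capita benchmarks.
gdp_per_capita = 3000  # hypothetical
thresholds = [500, 1000, gdp_per_capita, 3 * gdp_per_capita]
for t in thresholds:
    verdict = "cost-effective" if ratio <= t else "not cost-effective"
    print(f"ICER = ${ratio:,.0f}/DALY averted vs threshold ${t:,} -> {verdict}")
```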


Conclusions and next steps

Based on the experience of the HIV department in the 2013 ARV guidelines, we have identified several key considerations to inform future discussions on the appropriate use of modelling in the development of clinical or public health guidelines. These build on previously published 'best practice' principles for the planning, presentation and interpretation of mathematical models for policy-making in HIV care, developed by the Prevention Science and Mathematical Modelling Reference Group of the World Bank Global HIV/AIDS Programme [38]. These nine principles are as follows: clear rationale, scope and objectives; explicit model structure and key features; well-defined and justified model parameters; alignment of model output with data; clear presentation of results, including uncertainty in estimates; exploration of model limitations; contextualization with other modelling studies; application of epidemiological modelling to health economic analyses; and clear language [38].

Transparency in the conduct and reporting of model inputs and results

Given the inherent complexity of modelling analyses, it is paramount that there is transparency and clarity in the reporting of all model attributes, including assumptions and data sources, to allow decision makers to make informed judgments concerning the value and robustness of the results generated by the models. External reviewers and end-users should be able to independently replicate results with publicly available information on data sources and methods.

Examples of guidance on criteria and requirements for the reporting of modelling analyses have been published [15,27–29,38]. Key attributes of a model that end-users should examine before interpreting its outputs include: the purpose of the model; the model structure (deterministic vs. stochastic, or population-based vs. individual-based) and the rationale for its choice, with specification of the equation structure and flow diagrams as appropriate to show disease natural history and the impact of interventions; the appropriateness of the model for different questions and settings, and how the model structure could have influenced the results; the complete list of variables and parameters, their estimates and ranges, the sources of data (based on observation or extracted from previous modelling studies) and the validation process; the implicit and explicit assumptions contained in the model; the description of uncertainty analyses, as well as formal sensitivity analyses on key variables (i.e. examining the importance of each model parameter in influencing the variability in the model outcomes); the results of model assessment; and the availability of model documentation. It is also important for end-users to be able to identify the key drivers of outcome and cost–effectiveness in the model.

Initial key questions for end-users before using results of an individual model are as follows:

  1. Is there adequate documentation of the model available for all who wish to study or use the model?
  2. What assumptions and data were used in producing model output for specific applications?
  3. Why is the selected model the most appropriate to use? (What is the stated purpose of the model selected? What does it measure or not measure? Are there other models equally suited? Is its intended use compatible with the present need?)
  4. Is the model developed specifically for the present purpose or has it been adapted?
  5. Have all appropriate costs been included?
  6. Has the model been evaluated by someone other than the model authors?
  7. Has the model been validated over a full range of scenarios and settings? How well does it fit with observed data?
  8. How well does the structure of the model resemble the scenario being modelled?
  9. Is the model appropriately sensitive to the inputs being varied? (A minimal illustration of such a one-way sensitivity analysis follows this list.)
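To illustrate question 9 and the formal sensitivity analyses referred to above, the following minimal sketch (in Python) performs a one-way sensitivity analysis on a deliberately simplified, hypothetical outcome function; the function, parameter names and ranges are illustrative assumptions, not taken from any of the models cited in this article. Each parameter is varied across a plausible range while the others are held at base-case values, which is the logic underlying a tornado diagram.

```python
# Minimal sketch of a one-way sensitivity analysis on a deliberately
# simplified, hypothetical model of infections averted by ART scale-up.
# The outcome function and parameter ranges are illustrative only.

def infections_averted(coverage, adherence, transmission_risk, population=100_000):
    """Toy outcome: infections averted per year in a notional population."""
    effective_suppression = coverage * adherence * 0.96  # assumed efficacy if suppressed
    return population * transmission_risk * effective_suppression

# Base-case values and plausible low/high ranges for each parameter.
base = {"coverage": 0.60, "adherence": 0.80, "transmission_risk": 0.02}
ranges = {
    "coverage": (0.40, 0.80),
    "adherence": (0.60, 0.95),
    "transmission_risk": (0.01, 0.04),
}

base_outcome = infections_averted(**base)
print(f"Base case: {base_outcome:,.0f} infections averted")

# Vary one parameter at a time; rank parameters by the spread in the outcome.
spreads = []
for name, (low, high) in ranges.items():
    lo_out = infections_averted(**{**base, name: low})
    hi_out = infections_averted(**{**base, name: high})
    spreads.append((abs(hi_out - lo_out), name, lo_out, hi_out))

for spread, name, lo_out, hi_out in sorted(spreads, reverse=True):
    print(f"{name:18s} outcome range {lo_out:8,.0f} to {hi_out:8,.0f} (spread {spread:,.0f})")
```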

The need for agreed standards for critical appraisal and use of modelling data in healthcare policy-making

Since 2008, the GRADE methodology has been adopted by WHO to rate the quality of evidence used in support of its guideline recommendations. However, to date there has been no critical appraisal tool (other than guidelines for the development and review of health economic analyses [39]), nor specific guidance on the use of the GRADE system to evaluate the quality of evidence for population-level effect estimates of interventions based on model simulations, to be used alongside other standardized criteria for reporting and assessing model structure, assumptions, robustness and limitations [15,27–29]. The GRADE method was used in one WHO guideline, on drug-resistant tuberculosis, to rank the quality of data from a single cohort simulation study [32]. The evidence was ranked as very low quality after the observational data (ranked as low quality in accordance with GRADE) were downgraded by one level for indirectness. A direct application of this approach would therefore rank most models based on data from observational studies (e.g. cohort simulations) as very low quality in GRADE. The availability of an adapted GRADE system or another critical appraisal tool for modelling analyses would enhance their quality, interpretability and usefulness in future guidelines development. There have also been some early efforts to present meta-analyses or summaries of modelling results [17], as well as approaches that seek to compare modelling results with quasi-experimental findings.

Modelling of cost–effectiveness is only one component of decision-making in development of WHO recommendations and in priority setting

The primary evidence source to inform the development of clinical recommendations is, where available, RCTs, together with systematic reviews of RCTs and observational data. In addition to cost and cost–effectiveness, several other factors are considered in the final decision-making process to develop recommendations based on the GRADE methodology. Modelling data should therefore be regarded as important but supplementary to the main evidence base. To ensure the judicious use of modelling data in guidelines, there is a need for systematic guidance on how best to integrate modelling within a broad evidence review framework, using GRADE or an alternative methodology. Similarly, although cost–effectiveness analyses can inform the identification of priorities for implementation, they should be complemented by consideration of local epidemiology, affordability and the equity impact of different courses of action [36,40,41].

The need for closer interaction and an ongoing dialogue between modellers and model end-users or decision makers

Greater transparency, as well as improved relevance and impact of the modelling output, will also be achieved by closer interaction between modellers and all potential end-users, including WHO and country programme managers, from the outset and throughout the process. The most important initial joint step is framing a clear, answerable policy question relevant to the decisions facing programme managers in-country, including the objectives, comparisons, outcomes and timeframe, which will also be important in the subsequent interpretation of the modelling results. It is particularly critical to determine which costs are considered relevant and should be included in the analysis [40], as well as the outcome measure for the economic evaluation [i.e. quality-adjusted life-years (QALYs) or disability-adjusted life-years (DALYs)]. Wherever possible, all relevant costs should be assessed comprehensively from the perspective of both the health service provider and the patient (including, in the HIV context, the costs of expanding testing, pre-ART care and other frontline services), and standardized estimates (for example, using the WHO-CHOICE framework [41]) or existing country-specific data should be used.
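As a concrete illustration of the DALY metric mentioned above, the following minimal sketch (in Python) computes discounted years of life lost (YLL) and years lived with disability (YLD) for a single hypothetical case; the discount rate, disability weight and durations are placeholder assumptions, not WHO reference values.

```python
# Minimal sketch of a DALY calculation (YLL + YLD) with discounting.
# All inputs below are hypothetical placeholders, not WHO reference values.

def discounted_years(duration, rate=0.03):
    """Present value of one healthy year per year over `duration` years."""
    if rate == 0:
        return duration
    return (1 - (1 + rate) ** -duration) / rate

def dalys(years_of_life_lost, years_with_disability, disability_weight, rate=0.03):
    """DALYs = discounted YLL + discounted, disability-weighted YLD."""
    yll = discounted_years(years_of_life_lost, rate)
    yld = disability_weight * discounted_years(years_with_disability, rate)
    return yll + yld

# Hypothetical example: premature death 20 years before life expectancy,
# preceded by 5 years lived with symptomatic HIV disease (weight 0.25).
print(f"{dalys(20, 5, 0.25):.1f} DALYs")   # ~16.0 with a 3% discount rate
```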

Closer interaction with modellers will require policy makers to develop a working knowledge of modelling in order to understand the building blocks and limitations of a specific model, including its scope and structure, the assumptions made and how they affect outcomes. This may well require formal training on model development and interpretation. At country level, there is also a need for access to simplified, stakeholder-friendly modelling tools that allow further context-specific impact and cost analysis of changing ART recommendations in different target populations. Examples of tools available to assist countries in estimating the costs and impact of HIV interventions include Spectrum [42], in particular the AIM (AIDS Impact Model) and Goals (Cost and Impact of HIV Interventions) modules, and OneHealth – a WHO software tool designed to comprehensively assess health investment needs in low- and middle-income countries [42]. Finally, close collaboration is required in determining the policy implications of the results, planning further empirical or modelling studies, and communicating and translating model findings more widely.

WHO role in convening and facilitating comparative assessment of multiple models

The experience with the use of modelling in the 2013 ARV guidelines, as well as from other WHO departments [24–27], suggests that the optimal WHO role is as facilitator and convenor, encouraging multiple groups to participate and contribute their different modelling designs, structures and assumptions for a comparative assessment using standardized scenarios and datasets from different regions. As well as avoiding endorsement of a single model, the comparative modelling approach is emerging as best practice, with several clear advantages.

First, it allows decision makers to draw on the estimates from multiple different models and cost–effectiveness analyses of the same policy question, and where there is agreement, it can strengthen the robustness of the conclusions and provide 'convergent validity' of the findings. This is illustrated by the concurrence of all the models in their overall conclusion that expanding ART eligibility to all those with a CD4+ cell count of 500 cells/μl or less is cost-effective relative to prespecified benchmarks over 20 years, despite variations in their model structures and assumptions [33]. However, although consensus across multiple models can be reassuring, a common error in assumptions or parameter estimates is still possible (a simple sketch of this kind of cross-model comparison is given at the end of this subsection).

Second, it promotes greater transparency through access to, and peer review of, the sources of data and assumptions used for the various input parameters across multiple models. It therefore provides the opportunity to explain more systematically the observed variation among the models in relation to potential sources of disagreement in their assumptions (e.g. costs, levels of adherence, failure rates, mortality, loss to follow-up, and the cost of switching to second-line ART).

Third, it enables better characterization of the parameters driving uncertainty through sensitivity analyses across models, which may lead to iterative improvements in model specification. In turn, this can help identify evidence gaps and so focus efforts on the overall generation of better quality data to inform the most important model parameters (rather than on additional and often more complex models that suffer from the same data paucity).

Finally, such a forum offers the potential to build the capacity of analysts from low-income and middle-income countries through collaborative adaptation of existing models to new settings. Clearly, in the governance of such modelling consortia, it is critical that they include as many independent models and groups as possible, and that a balance is struck between ensuring coherence in addressing the same question and not being overly prescriptive.
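The cross-model comparison described above (convergent validity against a common benchmark) can be sketched very simply. The Python snippet below uses hypothetical ICER estimates from four notional models; it is not drawn from the analyses cited in this article and is intended only to show the logic of summarizing agreement and spread across independent models.

```python
# Minimal sketch with hypothetical numbers: summarizing ICER estimates from
# several independently developed models against a common benchmark.
from statistics import median

# Hypothetical cost per DALY averted (US$) for the same policy question,
# as estimated by different models.
model_icers = {
    "Model A": 620,
    "Model B": 810,
    "Model C": 540,
    "Model D": 950,
}

benchmark = 1000  # prespecified willingness-to-pay threshold, US$ per DALY averted

# Does each model reach the same conclusion against the benchmark?
verdicts = {name: icer <= benchmark for name, icer in model_icers.items()}
agree = all(verdicts.values()) or not any(verdicts.values())

print(f"Median ICER: ${median(model_icers.values()):,.0f}/DALY averted "
      f"(range ${min(model_icers.values()):,} to ${max(model_icers.values()):,})")
print("All models agree on the benchmark comparison:", agree)
for name, ok in verdicts.items():
    print(f"  {name}: {'cost-effective' if ok else 'not cost-effective'} at ${benchmark:,}/DALY")
```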

The need to optimize research and data collection

In the development of global guidance on HIV prevention and treatment, modelling cannot substitute for actual clinical data and outcomes from well-designed clinical trials and epidemiological studies. Access to high-quality, recent and locally relevant programmatic, clinical and socio-behavioural data is critical to inform parameter estimates, ideally based on systematic reviews and data from large multiregional observational cohorts. In the HIV field, this includes, for example, rates of HIV testing and linkage to care, levels of ARV coverage, adherence, retention, viral suppression and stage of disease on accessing care, and, where appropriate, the same measures in subpopulations (by gender, age or membership of vulnerable groups). Several large cluster-randomized trials of the impact of earlier ART initiation on HIV incidence at a population level are ongoing and will provide an important and direct test of the predictions from current modelling work. Although the majority of cost–effectiveness analyses are based on modelling, there is a need for more practice-based costing and cost–effectiveness studies as part of planned or ongoing clinical trials and observational studies. Finally, there is a need for an ongoing two-way interaction between researchers and modellers to identify key gaps in data collection, inform priority areas for future clinical and epidemiological research, and highlight new modelling questions arising from recent research.


Acknowledgements

The authors thank Dr Maicon Falavigna, McMaster University; Raymond Hutubessy, Initiative for Vaccine Research, Immunization, Vaccines and Biologicals Department, World Health Organization, Geneva; Cathy Roth, Epidemic and Pandemic Alert and Response, World Health Organization, Geneva; and Evelyn Whitlock, Kaiser Permanente Center for Health Research, for helpful discussions and comments on the article.

Conflicts of interest

There are no conflicts of interest.


References

1. Antiretroviral therapy for HIV infection in adults and adolescents: recommendations for a public health approach. 2010 revision. Geneva: World Health Organization; 2010. http://whqlibdoc.who.int/publications/2010/9789241599764_eng.pdf [Cited 17 May 2013].


2. Antiretroviral therapy for HIV infection in infants and children: towards universal access. Recommendations for a public health approach. 2010 revision. Geneva: World Health Organization; 2010. http://whqlibdoc.who.int/publications/2010/9789241599801_eng.pdf [Cited 17 May 2013].


3. Antiretroviral drugs for treating pregnant women and preventing HIV infection in infants: towards universal access: recommendations for a public health approach. 2010 revision. Geneva: World Health Organization; 2010. http://whqlibdoc.who.int/publications/2010/9789241599818_eng.pdf [Cited 17 May 2013].


4. Cohen MS, Chen YQ, McCauley M, Gamble T, Hosseinipour MC, Kumarasamy N, et al. Prevention of HIV-1 infection with early antiretroviral therapy. N Engl J Med. 2011; 365:493–505.

5. Kitahata M, Gange SJ, Abraham AG, Merriman B, Saag MS, Justice AC, et al. Effect of early versus deferred antiretroviral therapy for HIV on survival. N Engl J Med. 2009; 360:1815–1826.

6. Sterne JA, May M, Costagliola D, de Wolf F, Phillips AN, Harris R, et al., When To Start Consortium. Timing of initiation of antiretroviral therapy in AIDS-free HIV-1-infected patients: a collaborative analysis of 18 HIV cohort studies. Lancet. 2009; 373:1352–1363.

7. Guidance on couples HIV testing and counseling, including antiretroviral therapy for treatment and prevention in serodiscordant couples: Recommendations for a public health approach. Geneva: World Health Organization; 2012 .

8. Service delivery approaches to HIV testing and counseling (HTC): a strategic HTC policy framework. Geneva: World Health Organization; 2012 .

9. Hirnschall G, Harries AD, Easterbrook PJ, Doherty MC, Ball A. The next generation of the World Health Organization's global antiretroviral guidance. J Int AIDS Soc. 2013; 16:18757.

10. Consolidated guidelines on the use of antiretroviral drugs for treating and preventing HIV infection. Geneva: World Health Organization; 2013. www.who.int/hiv/pub/guidelines/arv2013


11. Balshem H, Helfand M, Schünemann HJ, Oxman AD, Kunz R, Brozek J, et al. GRADE guidelines: 3. Rating the quality of evidence. J Clin Epidemiol. 2011; 64:401–406.

12. Guyatt GH, Oxman AD, Vist G, Kunz R, Brozek J, Alonso-Coello P, et al. GRADE guidelines: 4. Rating the quality of evidence – study limitations (risk of bias) and publication bias. J Clin Epidemiol. 2011; 64:407–415.

13. Guyatt GH, Oxman AD, Kunz R, Woodcock J, Brozek J, Helfand M, et al. GRADE guidelines: 8. Rating the quality of evidence – indirectness. J Clin Epidemiol. 2011; 64:1303–1310.

14. Guyatt GH, Oxman AD, Sultan S, Glasziou P, Akl EA, Alonso-Coello P, et al., GRADE Working Group. GRADE guidelines: 9. Rating up the quality of evidence. J Clin Epidemiol. 2011; 64:1311–1316.

15. Garnett GP, Cousens S, Hallett TB, Steketee R, Walker N. Mathematical models in the evaluation of health programmes. Lancet. 2011; 378:515–525.

16. Bertozzi SM, Bautista-Arredondo S. Modeling the impact of antiretroviral use in developing countries. PLoS Med. 2006; 3:e148

17. Baggaley RF, Ferguson NM, Garnett GP. The epidemiological impact of antiretroviral use predicted by mathematical models: a review. Emerg Themes Epidemiol. 2005; 2:1–18.

18. Sainfort F, Kuntz KM, Gregory S, Butler M, Taylor BC, Kulasingam S, et al. Adding decision models to systematic reviews: informing a framework for deciding when and how to do so. Value Health. 2013; 16:133–139.

19. Kuntz K, Sainfort F, Butler M, Taylor B, Kulasingam S, Gregory S, et al. Decision and Simulation Modeling in Systematic Reviews. Methods Research Report. (Prepared by the University of Minnesota Evidence-based Practice Center under Contract No. 290-2007-10064-I.) February 2013. AHRQ Publication No. 11(13)-EHC037-EF. Rockville, MD: Agency for Healthcare Research and Quality.

20. Myers E, Sanders GD, Ravi D, Matchar D, Havrilesky L, Samsa G, et al. Evaluating the Potential Use of Modeling and Value-of-Information Analysis for Future Research Prioritization Within the Evidence-based Practice Center Program. (Prepared by the Duke Evidence-based Practice Center under Contract No. 290-2007-10066-I.) June 2011. AHRQ Publication No. 11-EHC030-EF. Rockville, MD: Agency for Healthcare Research and Quality.

21. Senouci K, Blau J, Nyambat B, Coumba Faye P, Gautier L, Da Silva A, et al. The Supporting Independent Immunization and Vaccine Advisory Committees (SIVAC) Initiative: a country driven, multipartner programme to support evidence based decision making. Vaccine. 2010; 28:(Suppl 1):A26–A30.

22. Archimedes Inc. HHS enlists Archimedes Inc. to expand government's use of healthcare modeling for forecasting quality and cost outcomes. Press release, 3 May 2012, San Francisco, California. http://archimedesmodel.com/PR-3-May-2012


23. Schlessinger L, Eddy DM. Archimedes: a new model for simulating health care systems–the mathematical formulation. J Biomed Inform. 2002; 35:37–50.

24. Chaiyakunapruk N, Somkrua R, Hutubessy R, Henao AM, Hombach J, Melegaro A, et al. Cost effectiveness of pediatric pneumococcal conjugate vaccines: a comparative assessment of decision-making tools. BMC Med. 2011; 9:53

25. Postma MJ, Jit M, Rozenbaum MH, Standaert B, Tu HA, Hutubessy RC, et al. Comparative review of three cost-effectiveness models for rotavirus vaccines in National Immunization Programs: a generic approach applied to various regions in the world. BMC Med. 2011; 9:84

26. Jit M, Demarteau N, Elbasha E, Ginsberg G, Kim J, Praditsitthikorn N, et al. Human papillomavirus vaccine introduction in low and middle income countries: guidance on the use of cost-effectiveness models. BMC Med. 2011; 9:54

27. Jit M, Levin C, Brisson M, Levin A, Resch S, Berkhof J, et al. Economic analyses to support decisions about HPV vaccination in low- and middle- income countries: a consensus report and guide for analysts. BMC Med. 2013; 11:23

28. A stakeholders panel to evaluate the impact of strengthening WHO's normative and policy setting functions for immunization, 2006–2010. Geneva: World Health Organization; 2009 .

29. Walker DG, Hutubessy R, Beutels P. WHO Guide for standardisation of economic evaluations of immunization programmes. Vaccine. 2010; 28:2356–2359.

30. Hutubessy R, Henao AM, Namgyal P, Moorthy V, Hombach J. Results from evaluations of models and cost-effectiveness tools to support introduction decisions for new vaccines need critical appraisal. BMC Med. 2011; 9:55

31. Moorthy VS, Hutubessy R, Newman RD, Hombach J. Decision-making on malaria vaccine introduction: the role of cost–effectiveness analyses. Bull World Health Organ. 2012; 90:864–866.

32. Guidelines for the programmatic management of drug-resistant tuberculosis. Geneva: World Health Organization; 2011.

33. Eaton JW, Menzies NA, Stover J, Cambiano V, Chindelevitch L, Cori A, et al. Health benefits, costs, and cost-effectiveness of earlier eligibility for adult antiretroviral therapy and expanded treatment coverage: a combined analysis of 12 mathematical models. Lancet Glob Health. 2014; 2:e23–e34.

34. Keebler D, Revill P, Braithwaite S, Phillips A, Blaser N, Borquez A, et al. Cost-effectiveness of different strategies to monitor adults on antiretroviral treatment: a combined analysis of three mathematical models. Lancet Glob Health. 2014; 2:e35–e43.

35. Martin NK, Devine A, Eaton JW, Miners A, Hallett T, Foster GR, et al. Modeling the impact of early antiretroviral treatment for adults coinfected with HIV and hepatitis B or C in South Africa. AIDS. 2013; 27:S281–S292.

36. The IeDEA Southern Africa Paediatric Collaboration. When to start ART in children aged 2–5 years? A collaborative causal analysis of cohort studies from Southern Africa. IAS, July 2013, Kuala Lumpur. Abstract no. TUPE313.


37. The HIV Modelling Consortium Treatment as Prevention Editorial Writing Group. HIV treatment and prevention: models, data, and questions – towards evidence-based decision-making. PLoS Med. 2012; 9:66–73.

38. Delva W, Wilson DP, Abu-Raddad L, Gorgens M, Wilson D, Hallett TB, Welte A. HIV treatment as prevention: principles of good HIV epidemiology modelling for public health decision-making in all modes of prevention and evaluation. PLoS Med. 2012; 9:66–73.

39. Husereau D, Drummond M, Petrou S, Carswell C, Moher D, Greenberg D, et al. Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. BMC Med. 2013; 11.

40. Musgrove P. Public spending on healthcare: how are different criteria related? Health Policy. 1999; 47:207–223.

41. Edejer TTT, Baltussen R, Adam T, Hutubessy R, Acharya A, Evans DB, et al., editors. Making choices in health: WHO guide to cost-effectiveness analysis. Geneva: World Health Organization; 2003.

42. Futures Institute. Glastonbury, CT. http://www.futuresinstitute.org [Accessed 11 December 2013].


Keywords

Grading of Recommendations, Assessment, Development and Evaluation; guidelines development; modelling; WHO

© 2014 Wolters Kluwer Health | Lippincott Williams & Wilkins

