Pitfalls of clinical practice guidelines in the era of broken science

Let's raise the standards

Afshari, Arash; De Hert, Stefan

European Journal of Anaesthesiology (EJA): December 2018 - Volume 35 - Issue 12 - p 903–906
doi: 10.1097/EJA.0000000000000892

From the Department of Paediatric and Obstetric Anaesthesia, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark (AA) and Department of Anaesthesiology and Perioperative Medicine, Ghent University Hospital, Ghent University, Ghent, Belgium (SDH)

Correspondence to Arash Afshari, MD, PhD, Department of Paediatric and Obstetric Anaesthesia, Rigshospitalet, University of Copenhagen, Copenhagen, Denmark

Clinical practice guidelines have become an instrumental tool for guiding daily practice. Their popularity is driven mainly by physicians’ desire for easy access to recommendations that facilitate their daily work. But the public and legislators equally have an interest in guidelines as a means of achieving the best possible care, given mounting medical costs, the availability of a plethora of newly developed expensive devices and technologies, a transformation of healthcare delivery, changing demographics with a rising number of elderly patients in need of treatment and, finally, substantial variation in the quality of care.

Our daily reality is one of clinicians constantly overwhelmed by the quantity of reported evidence from a mounting number of publications. This makes guidelines all the more popular, as we need aggregated and summarised evidence to guide our clinical decision-making. In theory, adherence to clinical guidelines should reduce variation in care and unnecessary costs while increasing quality and improving cost-value measures.1

The American Institute of Medicine (IOM) has defined five major purposes for clinical practice guidelines: first, assisting patients and practitioners in clinical decision-making; second, educating individuals or groups; third, assessing and ensuring the quality of care; fourth, allocating healthcare resources; and fifth, reducing the risk of legal liability for negligent care.1

The number of guidelines globally is increasing dramatically. For instance, the Guidelines International Network, founded as a global network in 2002, currently comprises 105 organisations representing more than 53 countries from all continents. This network, which supports evidence-based healthcare and improved health outcomes, now holds more than 6400 guidelines in its International Guideline Library.

The US National Guideline Clearinghouse and the Trip Medical Database cover between 1500 and 3000 healthcare-related guidelines, whereas the National Institute for Health and Care Excellence currently holds more than 1700 guidelines. However, beyond these major players and large scientific organisations, it often appears that different guideline groups can review the same clinical condition and reach conflicting conclusions.2 This mass production will naturally erode the validity of these guidelines and reduce the trust of healthcare providers and administrators.3,4

But how did we get here? Historically, much of the need for guidelines derived from the recognition of widespread practice variation and exploding healthcare costs in the United States during the 1970s and 1980s. This, combined with real concerns among physicians and institutions about litigation in the case of adverse outcomes, led to the promotion of guidelines.5

Although clinical guidelines may appear beneficial for our patients as a group, they may not always be appropriate for the individual patient, and certainly not for patients with multiple comorbidities. The increasing prevalence and complexity of patients with multiple chronic conditions raise concerns about the appropriateness and applicability of clinical practice guidelines for patient management, as most guidelines are currently designed with a single chronic condition in mind and are not adapted to comorbidity-related issues.6

In addition, the routine approach of guidelines is often in conflict with the concept of personalised medicine and shared decision-making.1

Methodologically rigorous guidelines are also massively resource-intensive and time-consuming to produce, which may ultimately delay the dissemination of urgent guidance, even when the final product is methodologically solid.

The real cause for concern is the trustworthiness of guidelines. Do they improve outcomes and optimise resource use? How do we decide whether a clinical practice guideline is trustworthy, and how do we raise standards? How do we act when clinical guidelines provide conflicting recommendations based on the same evidence?

The IOM has proposed a set of standards for the trustworthiness of guidelines that are in line with the policies of the European Society of Anaesthesiology (ESA).7–10 In general, they consist of eight elements:

  1. A transparent process of guideline development and funding;
  2. Declaration of conflicts of interest: ideally, none or at most a small minority of panel members should have conflicts, and the chair and co-chair should have none;
  3. The composition of guideline development groups should include active participation of methodology experts (as of now mandatory for ESA guidelines), clinicians and ideally also representatives of stakeholders and affected populations;
  4. Systematic synthesis of evidence through systematic reviews of the literature in accordance with existing standards such as the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA)11 and the recommendations of the Cochrane Collaboration;12
  5. Grading of the evidence and of the strength of each recommendation, with an explanation of the rationale behind it and a summary of the evidence for benefits and harms, using the GRADE methodology;
  6. Provision of easy-to-use recommendations, flowcharts and other means that facilitate adherence and disseminate the message;
  7. External review by various stakeholders and the public; and
  8. Updating of guidelines as new evidence emerges.

It is important to emphasise that, despite the apparent legal importance of guidelines in many settings for litigation purposes, guidelines are not meant to devise reimbursement policies, promote performance measures, act as legal precedents or serve as measures for certification, licensing or public reporting.13 They are meant to promote care!

However, despite the best of intentions to raise the standards of guidelines, the recommendations of even high-quality guidelines may be flawed. As described above, systematic reviews are a cornerstone of guidelines as they provide a summary of the evidence. Yet there is an alarming methodological deficit in many published systematic reviews, which increases the risk of false conclusions and recommendations.14,15

Thus, poorly conducted systematic reviews may introduce profound systemic flaws and bias. Academic and scientific organisations and public health providers may formulate guidelines that misinform and misguide clinicians, exhaust taxpayers’ money, provide little improvement in health and may even put patients in danger.

Yet there seems to be an apparent ignorance, or perhaps indifference, among reviewers and editors of scientific journals towards the methodological shortcomings of systematic reviews, whose conclusions are taken at face value.16

As an example, despite ongoing efforts to raise standards even among high-impact medical journals, Cochrane systematic reviews are consistently rated as having higher methodological quality, yet they are cited less frequently than non-Cochrane systematic reviews of lower methodological quality.17

With the number of systematic reviews published in PubMed rising by 3% annually and now exceeding the number of published randomised clinical trials,17 we are witnessing nothing less than a tremendous rise in the production of misleading, often unnecessary, flawed and high-risk-of-bias systematic reviews and meta-analyses.

This is despite the efforts of many grassroots organisations such as the Cochrane Collaboration to increase the methodological quality of systematic reviews through the introduction of tools such as the PRISMA guidelines,18 the PRISMA protocol guide19 and the Methodological Expectations of Cochrane Intervention Reviews.20 Despite all these efforts, the majority of current research effort seems wasted because of failed priorities and the methodological and design shortcomings of trials and systematic reviews.

And let’s face it: even with the best intentions and the highest methodological standards and rigour, systematic reviews and guidelines are no better than the trials and publications they include. How do we, for instance, address the fact that industry sponsorship significantly affects research outcomes? In a recent Cochrane systematic review, industry-sponsored studies were found to have 27% more favourable efficacy results [relative risk 1.27; 95% confidence interval (CI) 1.17 to 1.37].21

In another recent publication, the financial ties of principal investigators of clinical trials were independently and strongly associated with positive trial results (odds ratio 3.23, 95% CI 1.7 to 6.1). Thus, the findings of systematic reviews that include these biased industry-sponsored trials will naturally also affect clinical practice guideline recommendations if their methodology and conclusions are not scrutinised by guideline groups.22

Another major challenge regarding the publication of these poor-quality systematic reviews is the conflicts of interest and lack of methodological knowledge of peer reviewers. We are often faced with little spare time, and journals often demand quick assessment of articles within unrealistic time limits while providing few incentives for peer reviewers. Furthermore, there are no formal requirements for peer reviewers to have any statistical or methodological knowledge of systematic reviews, meta-analyses, GRADE or guideline methodology.

Added to the above is the issue of scientific fraud and the detection of fraudulent or manipulated trials included in systematic reviews and clinical practice guidelines. Since 1997, the Committee on Publication Ethics has formulated standards and guidance for editors on the detection and retraction of fraudulent research, but the problem of scientific fraud remains far from resolved, and one should be even more alert in the face of an explosion in the number of predatory fake online journals.23

As the academic and financial incentives of scientific publication for researchers and research institutions remain high, the eradication of scientific fraud remains unrealistic unless we embark on a path of international collaboration, raising standards and moving beyond the notion of ‘publish or perish’. We need to rethink the entire publication and academic system. We have to move beyond our obsession with statistical significance and P values. We should acknowledge and address the issues of reproducibility of trial findings and of data sharing. By some estimates, less than 50% of findings from basic science research at academic institutions can be reproduced in an industrial setting.24 These very same publications lay the foundation for the design of new drugs and clinical trials that often fail, show little effect or may even harm the very patients we want to help. Not to mention the misuse of resources.

Moving up the ladder, the quality of a large proportion of published clinical trials remains poor. We are witnessing an increasing number of small, underpowered clinical trials. Merely searching PubMed with the keyword ‘pilot studies’ makes it evident that there has been an astonishing rise in the number of pilot studies over the last 30 years.

As the medical literature is drowned by these biased, poor-quality, inadequately designed and underpowered trials, often ‘disguised’ as pilot studies, the systematic reviews and clinical guidelines based on those very same trials will naturally be biased as well. As recently reported in the BMJ, the median number of trials in Cochrane reviews is between six and 16, with a median of about 80 patients per trial. Thus, most meta-analyses, despite methodological rigour, will suffer from sparse data and remain underpowered, with a high risk of random error resulting in false, often positive, conclusions.25

Inadequate methodological standards in research not only directly affect patient care but also pose a great financial burden on our societies. Some recent estimates indicate that as much as 85% of global health research is wasted because of inadequate methodology and poor study design and conduct.26

Not only is this figure breathtaking but, given that 200 billion US dollars is spent annually on health and medical research, this estimated waste of 170 billion US dollars globally is shocking.

But how should we address the methodological deficit in science? One of the leading opinion makers in this field, Professor John Ioannidis, has recently proposed a list of research practices that could help increase the proportion of true research findings. They include large-scale collaborative research; adoption of a replication culture and reproducibility practices; registration of all steps of trial conduct (protocol, analyses, data set and results); sharing of data; containment of conflicts of interest for sponsors and authors; higher statistical and methodological standards; improvement of study design standards, peer review, reporting and dissemination of research; and, finally, better training in methods and statistics for editors, peer reviewers, trial investigators and authors.27

Clinical guidelines issued by major scientific organisations often have immense impact as they act as a standard of care, used to devise national and local protocols, measure physician performance, evaluate adherence to standards, inform legal and insurance coverage decisions, and determine the choice of drugs to be purchased and used.28 Moreover, they can also serve as ‘expert testimony’ in cases of litigation and malpractice.

Furthermore, we should in no way overlook the impact of conflicts of interest: despite various attempts to limit or reduce conflicts of interest among guideline authors and sponsors, most guideline authors have conflicts that may reduce the reliability of guideline recommendations.7,29,30

Nor should we turn a blind eye to the impact of industry influence on clinical guidelines.31

As a scientific society, much of our credibility depends on the very guidelines that we draft and promote and on the foundational research that we carry out at our institutions. We have an urgent obligation to raise the methodological standards of science in general. Our goal should be to increase the methodological standards of our guidelines in accordance with the IOM standards for trustworthiness, because the sad reality is that only a fraction of published clinical guidelines live up to them.31–33 Moreover, for the majority of guideline authors, the magnitude of their industry ties would disqualify them from authorship if measured against the IOM standards.34–36 We may never truly be able to eliminate conflicts of interest or scientific fraud but, as a scientific society, the ESA aims to improve standards, and we recognise that it is time to address the issue of broken science.37

Acknowledgements relating to this article

Assistance with the Editorial: none.

Financial support and sponsorship: none.

Conflicts of interest: none.

Comment from the Editor: this Editorial was checked and accepted by the Editors, but was not sent for external peer-review.

References

1. Graham R, Mancher M, Miller Wolman D, et al. Clinical practice guidelines we can trust, Institute of Medicine Committee on standards for developing trustworthy clinical practice guidelines. Washington, DC, USA: National Academies Press; 2011.
2. McAlister FA, Van Diepen S, Padwal RS, et al. How evidence-based are the recommendations in evidence-based guidelines? PLoS Med 2007; 4:e250.
3. Sniderman AD, Furberg CD. Why guideline-making requires reform. JAMA 2009; 301:429–431.
4. Shaneyfelt TM, Centor RM. Reassessment of clinical practice guidelines: go gently into that good night. JAMA 2009; 301:868–869.
5. Oza VM, El-Dika S, Adams MA. Reaching safe harbor: legal implications of clinical practice guidelines. Clin Gastroenterol Hepatol 2016; 14:172–174.
6. Goodman RA, Boyd C, Tinetti ME, et al. IOM and DDHS meeting on making clinical practice guidelines appropriate for patients with multiple chronic conditions. Ann Fam Med 2014; 12:256–259.
7. Ransohoff DF, Pignone M, Sox HC. How to decide whether a clinical practice guideline is trustworthy. JAMA 2013; 309:139–140.
8. De Hert S, Staender S, Fritsch G, et al. Preoperative evaluation of adults undergoing elective noncardiac surgery: updated guideline from the European Society of Anaesthesiology. Eur J Anaesthesiol 2018; 35:407–465.
9. Samama CM, Afshari A. ESA VTE Guidelines Task Force. European guidelines on perioperative venous thromboembolism prophylaxis. Eur J Anaesthesiol 2018; 35:73–76.
10. Kozek-Langenecker SA, Ahmed AB, Afshari A, et al. Management of severe perioperative bleeding: guidelines from the European Society of Anaesthesiology: first update 2016. Eur J Anaesthesiol 2017; 34:332–395.
11. Moher D, Liberati A, Tetzlaff J, et al. Preferred Reporting Items for Systematic Reviews and Meta-Analyses: the PRISMA statement. BMJ 2009; 339:b2535.
12. Higgins JPT, Green S. Cochrane handbook for systematic reviews of interventions version 5.1.0. The Cochrane Collaboration; 2011.
13. Rosenfeld RM, Shiffman RN, Robertson P. Clinical practice guideline development manual, third edition: a quality-driven approach for translating evidence into action. Otolaryngol Head Neck Surg 2013; 148:S1–S55.
14. Landoni G, Comis M, Conte M, et al. Mortality in multicenter critical care trials: an analysis of interventions with a significant effect. Crit Care Med 2015; 43:1559–1568.
15. Ioannidis JP. The mass production of redundant, misleading, and conflicted systematic reviews and meta-analyses. Milbank Q 2016; 94:485–514.
16. Ioannidis JP. Meta-analyses can be credible and useful: a new standard. JAMA Psychiatry 2017; 74:311–312.
17. Goldkuhle M, Narayan VM, Weigl A, et al. A systematic assessment of Cochrane reviews and systematic reviews published in high-impact medical journals related to cancer. BMJ Open 2018; 8:e020869.
18. Moher D, Liberati A, Tetzlaff J, et al. Preferred Reporting Items for Systematic Reviews and Meta-Analyses: the PRISMA statement. PLoS Med 2009; 6:e1000097.
19. Moher D, Shamseer L, Clarke M, et al. Preferred Reporting Items for Systematic Review and Meta-Analysis protocols (PRISMA-P) 2015 statement. Syst Rev 2015; 4:1.
20. Higgins J, Lasserson T, Chandler J, et al. Standards for the conduct and reporting of new Cochrane Intervention Reviews, reporting of protocols and the planning, conduct and reporting of updates. Methodological expectations of Cochrane intervention reviews (MECIR) version 1.05. 2018.
21. Lundh A, Lexchin J, Mintzes B, et al. Industry sponsorship and research outcome. Cochrane Database Syst Rev 2017; 2:MR000033.
22. Ahn R, Woodbridge A, Abraham A, et al. Financial ties of principal investigators and randomized controlled trial outcomes: cross sectional study. BMJ 2017; 356:i6770.
23. [No authors listed]. A consensus statement on research misconduct in the UK. BMJ 2012; 344:e1111.
24. Niven DJ, McCormick TJ, Straus SE, et al. Reproducibility of clinical research in critical care: a scoping review. BMC Med 2018; 16:26.
25. Roberts I, Ker K, Edwards P, et al. The knowledge system underpinning healthcare is not fit for purpose and must change. BMJ 2015; 350:h2463.
26. Chalmers I, Glasziou P. Avoidable waste in the production and reporting of research evidence. Lancet 2009; 374:86–89.
27. Ioannidis JP. How to make more published research true. PLoS Med 2014; 11:e1001747.
28. Lenzer J. Why we can’t trust clinical guidelines. BMJ 2013; 346:f3830.
29. Gale EA. Conflicts of interest in guideline panel members. BMJ 2011; 343:d5728.
30. Guyatt G, Akl EA, Hirsh J, et al. The vexing problem of guidelines and conflict of interest: a potential solution. Ann Intern Med 2010; 152:738–741.
31. Kung J, Miller RR, Mackowiak PA. Failure of clinical practice guidelines to meet Institute of Medicine standards: two more decades of little, if any, progress. Arch Intern Med 2012; 172:1628–1633.
32. Ruan X, Ma L, Vo N, et al. Clinical practice guidelines: the more, the better? North Am J Med Sci 2015; 8:77–80.
33. Reames BN, Krell RW, Ponto SN, et al. Critical evaluation of oncology clinical practice guidelines. J Clin Oncol 2013; 31:2563–2568.
34. Norris SL, Holmer HK, Ogden LA, et al. Conflict of interest disclosures for clinical practice guidelines in the national guideline clearinghouse. PLoS One 2012; 7:e47343.
35. Carlisle A, Bowers A, Wayant C, et al. Financial conflicts of interest among authors of urology clinical practice guidelines. Eur Urol 2018; 74:348–354.
36. Checketts JX, Sims MT, Vassar M. Evaluating industry payments among dermatology clinical practice guidelines authors. JAMA Dermatol 2017; 153:1229–1235.
37. De Robertis E, Longrois D. To streamline the guideline challenge: the European Society of Anaesthesiology policy on guidelines development. Eur J Anaesthesiol 2016; 33:794–799.
© 2018 European Society of Anaesthesiology