Real Evidence

Ernst, Edzard, MD, PhD

Progress in Preventive Medicine: June 2017 - Volume 2 - Issue 3 - p e0005
doi: 10.1097/pp9.0000000000000005
Editorials | Open Access

University of Exeter, United Kingdom.

Published online 16 June 2017

Disclosure: The author has no financial interest to declare in relation to the content of this article.

Address reprint requests to Edzard Ernst, University of Exeter, United Kingdom; E-mail: e.ernst@exeter.ac.uk

This is an open-access article distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives License 4.0 (CC BY-NC-ND), where it is permissible to download and share the work provided it is properly cited. The work cannot be changed in any way or used commercially without permission from the journal.

Real evidence? What is real evidence? Is there such a thing as unreal evidence? Normally not, but we live in unusual times where “alternative facts” and “post-truths” have become terms to be reckoned with. So, unreal evidence is perhaps more real than we think? The truth is that, having spent the last 25 years researching alternative medicine, I know quite a bit about unreal evidence; it has dominated this field since long before the Trump administration even existed.

Evidence is defined as the body of facts that leads to a given conclusion. In medicine, the clinical outcome after administering a treatment depends not just on the effects of the intervention but on a multitude of factors: the natural history of the disease, regression toward the mean, and placebo effects, to mention just the 3 most obvious. Even ineffective therapies can therefore be followed by positive outcomes, and nothing is easier than to fool yourself into believing that an ineffective therapy is effective.
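As an illustration, a minimal simulation sketch (all numbers invented, and a “therapy” that has no effect at all) shows how regression toward the mean alone can mimic a benefit when patients seek treatment on their bad days:

# Illustrative sketch only: regression toward the mean creating an apparent
# benefit of a therapy that does nothing. All numbers are invented.
import random

random.seed(1)

TRUE_MEAN = 50      # a patient's long-run average symptom score (invented)
NOISE_SD = 10       # day-to-day fluctuation around that average (invented)
ENROL_CUTOFF = 60   # patients seek the therapy only when they feel bad

before_scores, after_scores = [], []
for _ in range(100_000):
    before = random.gauss(TRUE_MEAN, NOISE_SD)    # score at presentation
    if before < ENROL_CUTOFF:
        continue                                  # only "bad days" lead to treatment
    after = random.gauss(TRUE_MEAN, NOISE_SD)     # later score; the therapy does nothing
    before_scores.append(before)
    after_scores.append(after)

print(f"mean score before 'treatment': {sum(before_scores) / len(before_scores):.1f}")
print(f"mean score after 'treatment':  {sum(after_scores) / len(after_scores):.1f}")
# The average falls back toward 50, so the useless therapy appears to help.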

Consequently, the evidence for or against the effectiveness of a therapy cannot be based on experience but must rely on clinical trials and systematic reviews of clinical trials. Real evidence, therefore, must be evidence from robust clinical trials.


But what is robust?

In 1747, James Lind conducted what was probably the first controlled clinical trial in the history of medicine. He wanted to test the effectiveness of various remedies used at the time for preventing scurvy. His trial had 6 groups of 2 (!) sailors each; 1 received cider, 1 “elixir of vitriol,” 1 vinegar, 1 sea water, 1 a herbal mixture, and 1 lemons and oranges. After only 6 days of treatment, it became clear that only the group receiving lemons and oranges was protected from scurvy. Lind’s study would not be categorized as robust by today’s standards; nevertheless, it changed the history of medicine as well as the fortunes of the British Empire. Despite its many flaws, his trial provided real evidence.

Robust is thus a relative term, and a study that was ground-breaking in the 18th century would be unacceptable today. Some health effects are so obvious that we do not even need a clinical trial. It has been wryly pointed out that we do not require a randomized clinical trial to determine that survival rates of sky-divers are better with than without parachutes. Yet most research questions in medicine relate to much more subtle effects, where we need large samples and must meticulously eliminate the influence of bias and confounding on the outcomes.
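To put rough numbers on this contrast, the standard normal-approximation formula for comparing 2 proportions shows how quickly the required sample size grows as effects become subtle; the event rates used below are invented purely for illustration:

# Rough sample-size sketch (normal approximation, two proportions).
# The event rates are invented to illustrate the point.
from statistics import NormalDist

def n_per_group(p1, p2, alpha=0.05, power=0.80):
    """Approximate patients per arm needed to detect event rates p1 vs p2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # two-sided significance level
    z_beta = NormalDist().inv_cdf(power)            # desired statistical power
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

# A parachute-sized effect (invented rates: 90% vs 1% mortality):
print(round(n_per_group(0.90, 0.01)))   # about 1 patient per arm
# A subtle but more typical effect (invented rates: 12% vs 10%):
print(round(n_per_group(0.12, 0.10)))   # several thousand patients per arm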

Bias is the term used to describe a systematic deviation from the truth. In clinical trials, bias often generates false-positive results: a therapy comes out as effective despite being useless. The most important types of bias in this context are “publication bias” and “selection bias.” The former describes the tendency for positive results to be published while negative findings remain unpublished, a phenomenon that inevitably generates too positive an overall picture when systematic reviews are conducted. Selection bias is an inherent limitation of clinical trials in which patients are allocated to the experimental treatment or the control by choice of the patient or the physician; the consequence can be that their expectations interfere with the outcome. Selection bias is best eliminated through randomized allocation to treatment groups.
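A minimal simulation sketch (with an invented standard error and a therapy whose true effect is exactly zero) illustrates how publication bias alone can conjure up a “benefit” in the published record:

# Illustrative sketch of publication bias: the therapy genuinely does nothing,
# but only trials with a 'significant benefit' get published. Numbers are invented.
import random

random.seed(2)

TRUE_EFFECT = 0.0   # the therapy has no effect
SE = 0.2            # standard error of each trial's estimate (invented)
N_TRIALS = 10_000

published = []
for _ in range(N_TRIALS):
    estimate = random.gauss(TRUE_EFFECT, SE)   # what one trial happens to observe
    if estimate / SE > 1.96:                   # 'significant benefit' -> gets published
        published.append(estimate)

print(f"published: {len(published)} of {N_TRIALS} trials")
print(f"mean published effect: {sum(published) / len(published):.2f}")
# A review of the published trials alone sees a clear 'benefit',
# although the true effect is exactly zero.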

Confounding is a term used in research to describe factors, other than those under investigation, that influence the result of an experiment or observation. During the early days of homeopathy, for instance, several epidemiological studies apparently showed that homeopathic treatment was effective in preventing death from infections like cholera or typhus. Such “evidence” gave a significant boost to homeopathy and goes a long way toward explaining how it became (and still is) popular in many parts of the world. Today we know that the advantage of homeopathy over conventional medicine was due to confounding: conventional medicine of the time was not just useless but outright harmful; hygienic conditions in homeopathic hospitals were better than in conventional institutions; and patients opting for homeopathy were better nourished and generally less ill than those treated with conventional medicine.
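A simple sketch with invented probabilities illustrates the mechanism: treatment choice has no effect on mortality at all, yet the crude comparison favors homeopathy because healthier patients select it:

# Illustrative sketch of confounding: treatment choice does NOT affect death,
# but healthier patients are more likely to opt for 'homeopathy'.
# All probabilities are invented for the example.
import random

random.seed(3)

counts = {"homeopathy":   {"deaths": 0, "patients": 0},
          "conventional": {"deaths": 0, "patients": 0}}

for _ in range(200_000):
    frail = random.random() < 0.5                 # baseline health, the confounder
    death_risk = 0.40 if frail else 0.05          # risk depends ONLY on frailty
    p_homeopathy = 0.20 if frail else 0.80        # healthier patients choose homeopathy
    group = "homeopathy" if random.random() < p_homeopathy else "conventional"
    counts[group]["patients"] += 1
    counts[group]["deaths"] += random.random() < death_risk   # treatment plays no role

for group, c in counts.items():
    print(f"{group:12s} mortality: {c['deaths'] / c['patients']:.1%}")
# Crude mortality looks far lower with homeopathy (about 12% vs 33%),
# purely because its patients were healthier to begin with.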

It follows that evidence has to be true; if it is not, it does not deserve the name evidence, and certainly not that of real evidence. Lind discovered real evidence even with a trial that was methodologically questionable. Almost 300 years later, seemingly robust studies often produce results that are not real.

In conclusion, the best way to make sure that medical evidence is real, I think, is to conduct trials in such a way that the likelihood of false results is minimized, to insist on independent replication of their findings, and to consider the totality of the available data on any given topic.

Copyright © 2017 The Authors. Published by Wolters Kluwer Health Inc., on behalf of the European Society of Preventive Medicine.