In the book Ambient Findability, author Peter Morville1 inspires readers with the tenet that “what we find changes who we become.” From the positive vantage point, this can be an affirmation of the way the availability of information in our digital age has the potential to transform what we know and how we think. From a more neutral perspective, the veracity of the information, the way the information is structured, the literacy of the reader, and the accessibility of the material determine how the information is used, by whom, and the influence it ultimately has on a person, community, or society. However, there is also a more cautionary viewpoint: the way information is presented has the potential to misrepresent and distort, and this has important parallels in research. If “what we find” in our search for information is a summary of evidence that misrepresents the rigor and strength of the component studies, it has the potential to change practice in a way that may be deleterious to our patients.
For the sake of allegory, we might imagine a world where rehabilitation science was simple and straightforward, where effective interventions had positive outcomes regardless of dose (the frequency, duration, repetitions, intensity, etc. of the intervention), where outcomes in individuals with one type of neurologic condition were equally applicable to all other types of neurologic conditions, where results from small studies were always the same as results from large studies, and where statistical significance equated with clinical significance. Think of how much easier it would be to generate quality evidence, how many fewer studies would need to be done, and how straightforward it would be to translate evidence to practice if only research outcomes were that black and white. Unfortunately, complicated dose-response relationships, divergent outcomes among different clinical populations, and the low power and limited generalizability of small studies are but a few of the issues and hurdles we all have to contend with as we try to translate evidence into practice. The bottom line is that science is complicated.
Systematic reviews and meta-analyses are intended to methodically extract, critically appraise, and synthesize the literature in a defined content area to make interpreting the evidence less complicated. However, these articles represent study designs and, as with any study, the results and the conclusions derived from them are only valuable if the study is well designed and carefully executed. A commentary published online in MedPage Today,2 in fall 2018, lamented the elevation of the systematic review/meta-analysis to a station where it is considered the highest level of evidence.3 One consequence has been that, in many journals, the publication of systematic reviews/meta-analyses exceeds the publication of original research. The author of the commentary calls on journal editors to give systematic reviews/meta-analyses “the highest level of skepticism.” A count of systematic reviews/meta-analyses submitted to the Journal of Neurologic Physical Therapy from January 2017 through the writing of this editorial in May 2019 indicates there were 44 submissions of this article type, of which only 5 were published. Most of the 39 rejected submissions were declined on editorial review for one or more of the following common reasons: they duplicated reviews that had previously been published, were based on so few studies that meaningful conclusions could not be expected, did not adequately assess strength of the evidence or risk of bias, or mixed studies with different interventions or different clinical populations whose results could not meaningfully be combined.
When considering the strength of any type of research summary, it is particularly important to be certain that the evidence derived from multiple sources is synthesized in a way that accurately reflects the strength (or lack thereof) of the data on which that evidence is founded. Meaningful results from a strong study may be diluted when combined in a meta-analysis with studies that are underpowered, underdosed, or simply poorly designed. The result is conclusions that negate the evidence and obscure the ability to make either causal inferences or inferences of association. Conversely, when a meta-analysis is built on weak trials with positive but questionable conclusions, cobbling the results together with insufficient underlying data builds false evidence and does a disservice to the advancement of theory and practice.
As savvy consumers of research, we must be as vigilant for bias and misrepresentation in articles that summarize evidence as we are when evaluating the value of an isolated study. Only by critically appraising the evidence in all its forms can we avoid the pitfalls of fake news in science.
1. Morville P. Ambient Findability: What We Find Changes Who We Become. Sebastopol, CA: O'Reilly Media, Inc; 2005.
2. [Reference not included in source.]
3. Sackett DL. Rules of evidence and clinical recommendations on the use of antithrombotic agents. Chest. 1989;95(2 suppl):2S–4S.