Is ascribing a hierarchy to research, by typifying it into groups, useful for the development of science and at the very least free of potential bias, or does it unwittingly introduce bias? While the evolution of human life on this planet has largely been driven by the willingness of humans to go down blind alleys and search, ascribing a hierarchy to the type of search conducted is a development not known to have influenced our existence and evolution. The meaning of science, which throughout our evolution was much broader than it appears now, has been narrowed to a distinct set of approaches and successful outcomes, resulting in a much more limited understanding.
Science has evolved from denoting a type of knowledge to being a specialized word for the pursuit of such knowledge, thereby shifting the focus from knowledge itself to its pursuit. An interchange of terms between science and research seems to have accompanied this shift, as research, a systematic search for useful information on a particular topic, now appears to drive the way science is conducted and interpreted. The domains of information, the ways it is collected, and the sources offering it may vary, ranging from experiences to individuals to situations, but of late, how information is collected, interpreted, and communicated has come to be defined by the hierarchy. This hierarchical interpretation of scientific research may force a change in the way problems are viewed, one that extends far beyond the domain of science itself, by restricting the significance of outcomes to the way science is approached; this may well be an undoing.
Hierarchy of Evidence
Ranking is generally used as a method to cluster information into groups and to engage stakeholders in these defined ways of collecting and publishing data. The question remains, however, whether such ranking helps science or whether these rankings merely force individual and collective biases into particular ways of collecting or interpreting data.
In medicine and health, a system that rank-orders the information obtained from scientific research and then clusters it into groups has long captured the imagination of a large majority of researchers, even though there is broad agreement on the strength of one such group in comparison with other large-scale epidemiological studies. Despite this agreement, as many as 80 different hierarchies have been proposed for assessing the strength of medical evidence, and many more are expected. The fundamental understanding behind this agreement is that the way a study is conducted (its design) and the endpoints measured (statistical or otherwise) affect the strength of the evidence and thereby influence the ranking.
The general consensus within these hierarchies is that systematic reviews and meta-analyses of completed, high-quality randomized controlled trials, particularly those published by groups like Cochrane, rank higher than evidence generated by other designs, for example, observational studies. The rankings have come to assume that at the top of the hierarchy sits a design most likely to be free from systematic bias. Despite this consensus, what the rankings fail to address are the differentials in the research base: from geography to the settings in which research is conducted, to the translational value of the research, to the repetition of the systematic reviews and meta-analyses conducted, and to whether the scientific question raised by the research is answered at all.
The fact that the hierarchy seems to grant systematic reviews and meta-analyses a halo that leaves little room for questioning is a cause for concern. Proponents of hierarchies rarely acknowledge that systematic reviews and meta-analyses, unless absolutely free from bias, may in effect exaggerate the shortcomings of the original research studies. Over time, these key reviewing tools have faced criticism, some of it even from their own proponents. Beyond criticism, however, some aspects need immediate attention.
A published data analysis revealed that between January 1, 1986, and December 4, 2015, 266,782 items were tagged as “systematic reviews” and 58,611 as “meta-analyses” on PubMed. Importantly, while publications listed as systematic reviews increased by 2,728% between 1991 and 2014 and those listed as meta-analyses increased by 2,635%, the overall increase across all PubMed-indexed items was only 153%. The analysis also revealed that the topics addressed by meta-analyses were frequently overlapping or redundant, with multiple meta-analyses conducted on the same topic.
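As an aside, the growth figures above are straightforward percent-change calculations. The sketch below shows the arithmetic only; the raw yearly counts used here are hypothetical placeholders, since the actual 1991 and 2014 PubMed tallies are not given in the text.

```python
def percent_increase(start: float, end: float) -> float:
    """Relative growth between two counts, expressed as a percentage."""
    return (end - start) / start * 100

# Hypothetical counts chosen only to illustrate the arithmetic;
# these are not the real PubMed figures.
reviews_1991, reviews_2014 = 100, 2828
print(percent_increase(reviews_1991, reviews_2014))  # 2728.0
```

A count growing from 100 to 2,828 items corresponds to the 2,728% increase reported for systematic reviews, against a background growth of only 153% for PubMed as a whole.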
A staggering 185 meta-analyses of antidepressants for depression were published between 2007 and 2014. More important than the number, however, is that a large proportion of these were produced either by industry employees or by authors with identifiable industry ties. Some nations, China for example, rapidly became the most significant producers of PubMed-indexed meta-analyses in the English language during the period analyzed. A large part of the Chinese meta-analyses on genetic associations (63% of global production in 2014), however, can be misleading, since they may pool information from the abandoned era of candidate-gene studies. The idea of research findings being universally implementable takes a backseat, as most reviews and analyses are restricted to the geographies from which the bulk of the original studies came, thereby adding little to the already available information or its application.
Ideally, all meta-analyses should be conducted as part of a collaboration or consortium and followed up with joint analyses. Meta-analyses conducted by lone individuals should be discouraged.
This article argues that a large number, possibly the large majority, of systematic reviews and meta-analyses produced to date are not useful, although it does not rule out their usefulness altogether.
Financial support and sponsorship
Conflicts of interest
There are no conflicts of interest.
1. Siegfried T. Philosophical critique exposes flaws in medical evidence hierarchies. Science News. 2017 Nov 13. Available from: https://www.sciencenews.org/blog/context/critique-medical-evidencehierarchies [Last accessed on 2021 Dec 5].
2. Hailemariam D, Lulseged S, Derbew M. Generating evidence for practice: Junior scholars in the limelight. Ethiop Med J. 2020;58(Suppl 2):79–80.
3. Ioannidis JPA. The mass production of redundant, misleading, and conflicted systematic reviews and meta-analyses. Milbank Q. 2016;94:485–514.