EDITORIAL

The hardest thing about learning is unlearning: why systematic review replication should be reconsidered

Jordan, Zoe1; Karunananthan, Sathya2

doi: 10.11124/JBIES-20-00452

When any real progress is made, we unlearn and learn anew what we thought we knew before.

– Henry David Thoreau

Ever since the publication of Peter Senge's The Fifth Discipline 30 years ago, organizations have sought to become “learning organizations” that continually transform themselves.1 This thinking has been extended more recently to include the concept of “learning health systems,” whereby “science, informatics, incentives, and culture are aligned for continuous improvement and innovation, with best practices seamlessly embedded in the delivery process and new knowledge captured as an integral by-product of the delivery experience.”2(p.136)

In an era characterized by disruption, this goal is more important than ever. Indeed, researchers (and research synthesists) should be archetypal “learners” as we attempt to make sense of and explain the world around us. We should be experts at it. However, it sometimes feels as though we succumb to the same learning traps as everyone else. Perhaps part of the problem is that we are focused on the wrong thing: the real challenge isn’t learning; it’s unlearning.

Systematic reviews and meta-analyses have long been recognized as “indispensable components in the chain of scientific information and key tools for evidence-based medicine.”3(p.486) The production of reviews and meta-syntheses has grown exponentially across disciplines in recent years, and while some have argued that they may assist in reducing research waste,4 others have noted that substantial unintentional overlap in the conduct of syntheses results in considerable redundancy and wasted effort.5 In an era in which research waste is of increasing concern, the repetition of any work, whether in primary or secondary research, warrants careful consideration. Organizations such as JBI, Cochrane, and the Campbell Collaboration have well-established policies and protocols to avoid unnecessary duplication of systematic reviews. However, it is important to understand that planned replication does not equate to waste. Replication of systematic reviews has historically been hampered by a lack of understanding of both its purpose and its process. Opportunities exist to maximize the potential of systematic review replication to add real value to research, policy, and practice while still limiting the waste that arises from unnecessary duplication.

Tugwell et al.6 have explored when the decision to replicate a systematic review of interventions should be made. This work does not negate the role of systematic review replication, but rather seeks to generate a more meaningful understanding of when it should occur. Indeed, with reference to the current COVID-19 pandemic, the potential value of systematic review replication has been identified as a means to more meaningfully coordinate our collective response to such global emergencies.7 The curation of available knowledge may take the form of replicating reviews that proved relevant in previous pandemics, or reviews previously conducted in different contexts, as well as encouraging collaboration among teams concurrently working on the same question to avoid redundancy and waste.7

The work conducted by Tugwell et al.6 involved the generation of a checklist, which serves as an “explicit prompt to carefully consider the value of replication alongside other options such as updating, de novo reviews, and overviews of reviews.”(p.6) Given the scarcity of guidance currently available to inform such decisions, this work is both important and timely.

The four-item checklist includes questions related to the assessment of research priorities; the likelihood that replication will address uncertainties, controversies, or the need for additional evidence; the benefit or harm resulting from implementation of the intervention in question; and the opportunity costs of conducting the systematic review replication. It is designed to be used in conjunction with other available tools (such as AMSTAR-2 and systematic review priority-setting tools). Such guidance has significant implications for research translation more broadly, giving decision-makers greater confidence in drawing on review results to inform policy and practice.

The value of automation and machine learning may also become more obvious with this development in thinking around replication, given its potential to reduce human error in searching and data extraction. Leveraging machine learning to replicate a systematic review has reportedly reduced the time frame from three to six months to only six days, as well as reducing the cost, making replication a more realistic and feasible endeavor.8 Equally, the concept of living reviews is coming to the fore as we live through the experience of COVID-19 and appreciate the benefits of having access to reviews that are continually updated. Replication potentially has a role to play here, too, by reducing uncertainty in published living review findings through, for example, the identification of errors or the broadening of the research question.7

At this juncture in the evolution of the evidence-based health care movement, it is timely and important to consider whether we are still operating with mental models that best serve the needs of the broader evidence-based health care community. To embrace the new logic of value creation with regard to the synthesis of the best available evidence, we have to unlearn the old one. Organizations such as JBI, Cochrane, and the Campbell Collaboration are well positioned to support authors seeking to determine whether systematic review replication is warranted. Inclusion of checklists, such as the one proposed by Tugwell et al.,6 in systematic review guidance provided by leading authorities may well be a key strategy for reaching stakeholders undertaking evidence synthesis work.

Unlearning should be seen as an opportunity to advance our collective wisdom and discern an alternative paradigm that is more fit for purpose. When we learn, we add new skills or knowledge to what we already know. When we unlearn, we step outside our traditional mental models to choose a different one. This is what Tugwell et al.6 are encouraging evidence synthesists to do. Unlearning the traditional replication paradigm (often shaped by multilayered policy and research funding bureaucracies, or by a simple lack of understanding) in favor of one that is transparent and reduces “meta-waste” in “meta-analysis” to achieve “meta-value” would surely be of great benefit to the synthesis community and those it seeks to serve.

References

1. Senge PM. The fifth discipline: the art and practice of the learning organization. New York: Doubleday/Currency; 1990.
2. Institute of Medicine. Best care at lower cost: the pathway to continuously learning health care in America. Washington, DC: The National Academies Press; 2013.
3. Ioannidis JPA. The mass production of redundant, misleading, and conflicted systematic reviews and meta-analyses. Milbank Q 2016; 94 (3):485–514.
4. Johnson B, Adewumai T, Sims M, Vassar M. Systematic reviews in the prevention of research waste in emergency medicine randomized controlled trials [internet]. SHAREOK Repository; 2019 [cited 2020 Jul 16]. Available from: https://shareok.org/handle/11244/323881.
5. Siontis KC, Ioannidis JPA. Replication, duplication, and waste in a quarter million systematic reviews and meta-analyses. Circ Cardiovasc Qual Outcomes 2018; 11 (12):1–3.
6. Tugwell P, Welch VA, Karunananthan S, Maxwell LJ, Akl EA, Avey MT, et al. When and when not to replicate systematic reviews of interventions: consensus checklist. BMJ 2020; 370:m2864.
7. Page MJ, Welch VA, Haddaway NR, Karunananthan S, Maxwell LJ, Tugwell P. “One more time”: why replicating some syntheses of evidence relevant to COVID-19 makes sense. J Clin Epidemiol 2020; 125:179–182.
8. Michelson M, Ross M, Minton S. AI2 leveraging machine-assistance to replicate a systematic review. Value Health 2019; 22 (Suppl 2):S34.
© 2020 JBI