Evidence in reproductive medicine : Reproductive and Developmental Medicine


Editorial

Evidence in reproductive medicine

Revelli, Alberto1,*; Ruffa, Alessandro2; Gennarelli, Gianluca2

Author Information
Reproductive and Developmental Medicine: September 2022 - Volume 6 - Issue 3 - p 129-130
doi: 10.1097/RD9.0000000000000028

Prof. Alberto Revelli has a Degree cum laude in Medicine (1984), with a specialization in Obstetrics and Gynecology (1988), and a PhD in Obstetrical and Gynecological Sciences (1996) from the University of Torino, Italy. From 1999 to 2005, he was a researcher in Obstetrics and Gynecology; from 2006 to 2015 he was an Aggregate Professor; and from 2015 to the present he has been an Associate Professor in Obstetrics and Gynecology at S. Anna Hospital, University of Torino, Italy. Since 2021, he has been the Director of the II Chair in Obstetrics and Gynecology of the University of Torino, School of Medicine. Since 1999, he has been a lecturer of Reproductive Biotechnology and IVF at the Degree School of Medical Biotechnology and at the Degree School of Obstetrics, at the Specialization Schools in Gynecology and Obstetrics, Endocrinology, and Medical Genetics, in the PhD course of Clinical Sciences, and in the II Level Master course in Physiopathology of Reproduction and ART, all at the University of Torino, Italy. Presently, he is the Director of the II Level Master course in Maternal and Fetal Medicine, IVF consultant at Generalife-LIVET Clinic for Assisted Reproduction in Torino, Italy, and a member of the Group of Fertility Experts of the Italian Ministry of Health. He has authored 144 articles in peer-reviewed journals (available through PubMed), 1 book, 41 book chapters, and several articles in proceedings, with an extended H-index of 32.

The aim of every practitioner working in the field of Reproductive Medicine is to offer the best available diagnostic and therapeutic options to patients, to fulfill their desire for a healthy baby. Avoiding unnecessary, ineffective, or potentially harmful procedures should be considered equally relevant. Providing evidence for the effectiveness, safety, and costs of medical interventions should be the main task of clinical researchers. However, investigators focus mainly on effectiveness, without always devoting adequate effort to safety issues and average costs[1].

It is well known that in Reproductive Medicine, novel interventions are frequently introduced into clinical practice before accurate validation[1,2]. In several cases, assisted reproductive technologies (ARTs) claimed to improve success have never been adequately assessed for any plausible and clinically relevant benefit[2,3]. Robust evidence is still lacking for many adjuvant therapies currently in use, which tend to rely on an empirical basis. Past experience has shown that, in several cases, such therapies/techniques proved useless when rigorously tested[4,5]. Obviously, when deciding whether to use a medical intervention, the personal experience of the clinician is somehow relevant, but it has limited reliability. For this reason, experience-based medicine has gradually been replaced by, or integrated with, evidence-based medicine (EBM).

EBM arises from properly designed scientific studies in which appropriate statistics are applied to clinical databases in order to draw conclusions. There is a hierarchy of clinical study designs. Observational studies have the lowest validity and include case reports, case series, case-control studies, cross-sectional studies, and prospective cohort studies. Intervention studies, that is, randomized controlled trials (RCTs), sit at the top of the evidence pyramid. Nowadays, RCTs, systematic reviews, and meta-analyses are considered the most reliable sources of scientific evidence.

RCTs are few and often of limited size, owing to their complexity and cost. An RCT needs a preliminary power calculation to estimate how many observations should be included in the trial to achieve enough statistical power to detect relevant differences in the primary endpoints between two or more interventions. Statistical power increases with sample size. Given the small signal-to-noise ratio of many procedures in reproductive medicine, only large-scale, multicenter, often international RCTs seem to provide sufficient evidence when the adoption of new clinical procedures is at stake[6,7]. Obviously, the number of cases included in the study strongly affects its economic cost. However, whereas the expense of conducting large RCTs is growing steadily, a parallel increase in the quantity of evidence generated is lacking[8]. Frequently, this is the result of a trade-off between the level of evidence and the costs sustained: surrogate endpoints are used in order to limit the economic burden.
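
As a back-of-the-envelope illustration of why such trials must be large (a minimal sketch, not taken from this editorial; the 25% versus 30% live birth figures are purely hypothetical), the standard normal-approximation formula for comparing two proportions can be computed with the Python standard library:

```python
from math import ceil, sqrt
from statistics import NormalDist

def sample_size_two_proportions(p1, p2, alpha=0.05, power=0.80):
    """Per-arm sample size to detect a difference between two proportions
    (two-sided test, normal approximation)."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)   # e.g., 1.96 for alpha = 0.05
    z_beta = NormalDist().inv_cdf(power)            # e.g., 0.84 for 80% power
    p_bar = (p1 + p2) / 2
    numerator = (z_alpha * sqrt(2 * p_bar * (1 - p_bar))
                 + z_beta * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Hypothetical example: detecting an improvement in live birth rate
# from 25% to 30% requires well over a thousand patients per arm.
n = sample_size_two_proportions(0.25, 0.30)
```

With these assumed figures the formula returns roughly 1,250 patients per arm, whereas a much larger (and clinically implausible) jump from 25% to 40% would need fewer than 200: small expected effects are precisely what drive trial size and cost.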

However, it is statistically incorrect to draw conclusions on secondary endpoints, since the study is usually powered to detect potential differences only in the primary outcomes. In addition to the economic issue, RCTs performed on selected population samples may not be fully reliable when their results are extended to the wide population of patients seen in daily clinical practice.

Indeed, in RCTs, all procedures are strictly codified by protocols, which are rigorously followed during the study, ensuring standardized and controlled conditions. The population of patients enrolled is specifically selected by discrete inclusion and exclusion criteria. This setting is designed to control variability and improve the quality of data, but inevitably differs from what happens in daily clinical practice, making the generalizability of RCTs questionable.

An easy and popular way to overcome the problem of small and underpowered RCTs is to pool several trials together and perform a meta-analysis on the cumulated sample size[9]. However, meta-analyses have two major problems: heterogeneity and selection bias. The RCTs included in a meta-analysis should have very similar characteristics to allow pooling: among others, inclusion and exclusion criteria, methods, and endpoints. Several outcomes have been used by researchers to evaluate the efficacy of new interventions, but there is still no consensus on which is the most relevant. Widely used are positive pregnancy tests, implantation rate, clinical pregnancy rate, ongoing pregnancy rate, live birth rate, cumulative live birth rate, and a recently introduced marker of success proposed by the POSEIDON group, that is, the number of oocytes required to have at least one euploid embryo for transfer in each patient[10]. Given this array of heterogeneous, non-comparable outcomes, a standardized approach to reporting data in all trials dealing with Reproductive Medicine has been recognized as a priority for future infertility research, and an international consensus has been developed for this purpose[11–13].
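
To make the pooling step concrete (a minimal sketch, not from this editorial; the three trial odds ratios and variances below are entirely hypothetical), a fixed-effect inverse-variance meta-analysis of log odds ratios, with Cochran's Q as a crude heterogeneity check, can be written as:

```python
from math import log, exp, sqrt

def fixed_effect_meta(odds_ratios, log_or_variances):
    """Inverse-variance fixed-effect pooling of log odds ratios.
    Returns the pooled OR, its 95% CI, and Cochran's Q statistic
    (large Q relative to k-1 trials suggests heterogeneity)."""
    logs = [log(o) for o in odds_ratios]
    weights = [1 / v for v in log_or_variances]        # weight = 1 / variance
    pooled_log = sum(w * y for w, y in zip(weights, logs)) / sum(weights)
    q = sum(w * (y - pooled_log) ** 2 for w, y in zip(weights, logs))
    se = sqrt(1 / sum(weights))                        # SE of the pooled log OR
    ci = (exp(pooled_log - 1.96 * se), exp(pooled_log + 1.96 * se))
    return exp(pooled_log), ci, q

# Hypothetical results from three small trials of the same adjuvant therapy:
pooled_or, ci, q = fixed_effect_meta([1.10, 1.35, 0.92], [0.04, 0.09, 0.06])
```

With these invented inputs the pooled odds ratio is about 1.09 but its 95% confidence interval still crosses 1: pooling three small, individually underpowered trials need not yield a conclusive result, which is exactly the limitation the editorial warns about.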

Pooling different studies together may also introduce bias. Selection bias arises from researchers’ tendency to publish only “positive” studies showing significant results. “Negative” studies showing no significant difference are either stopped before publication by their own authors, published in minor, non-indexed journals, or not published at all. Thus, new trials are designed without awareness of previous “negative” studies. To overcome this problem, the registration of RCTs in clinical trial registries has become more common and is often required. Reporting pre-specified outcomes also prevents changing study endpoints after data analysis[14].

So far, however, few meta-analyses are able to draw truly reliable conclusions, whereas the great majority are not clinically useful at all[14].

Given these critical issues with traditional studies, in particular the need for a large number of observations to overcome the wide variability among patients, new approaches such as Real-World Data and Real-World Evidence (RWE) are gaining momentum in health care decision-making. RWE is the evidence obtained by analyzing data collected from daily clinical practice in the whole, unselected patient population. RWE may be complementary to the evidence obtained from traditional clinical trials[8]. Real-world data are collected from several sources, including electronic health records, insurers, medical product and disease registries, and data gathered through patients’ devices and software applications[14,15]. Since more and more electronic databases are made available, it is possible to create massive datasets with large populations and long-term follow-up[14].

However, these new data sources raise concerns about the quality, accuracy, and reliability of the information collected, since they are usually not recorded for research purposes. Furthermore, ethical issues – for example, lack of informed consent, possible identification of patients – and difficulties in data linkage stand out among the most concerning questions[14].

In Reproductive Medicine, large datasets are collected from the databases of IVF clinics, large multicenter databases, and national and supra-national IVF registries (when available). The large amount of observational real-world data provided by those databases might represent a faster, easier, and cheaper option than large, well-designed RCTs when the goal is to evaluate the effectiveness and safety of new treatments and technologies[2].

RWE may potentially provide relevant information about each aspect of ART, and may be applied to several patient subgroups, which could go undetected in the limited and selected samples of conventional clinical trials.

Of note, RWE can be produced by different study designs or analyses, either retrospectively or prospectively, ranging from observational studies to those that incorporate planned interventions, with or without randomization. The selection of an appropriate analytic approach is needed because inadequate methodology could result in poorly conceived studies, which generate incorrect or unreliable conclusions[8]. The collaboration of biostatisticians and epidemiologists able to use sophisticated statistical methods is thus essential, and a cautious interpretation of the results is mandatory[2,14].

In conclusion, accumulating truly new knowledge in the field of Reproductive Medicine seems to be a slow process[5]. There is mounting evidence of the difficulties in obtaining high-quality evidence from traditional clinical trials, because of their huge economic cost and complex methodology. The solution cannot be found in performing meta-analyses of small trials underpowered to detect a significant difference in outcome measures. Future research in infertility should focus on quality, reproducibility, and clinical utility[14]. A contribution will most likely come from the analysis of big data and the generation of RWE[8].

References

[1]. Martins WP, Niederberger C, Nastri CO, et al. Making evidence-based decisions in reproductive medicine. Fertil Steril. 2018;110(7):1227–1230. doi:10.1016/j.fertnstert.2018.08.010.
[2]. Griesinger G. Is progress in clinical reproductive medicine happening fast enough? Ups J Med Sci. 2020;125(2):65–67. doi:10.1080/03009734.2020.1734991.
[3]. Stocking K, Wilkinson J, Lensen S, et al. Are interventions in reproductive medicine assessed for plausible and clinically relevant effects? A systematic review of power and precision in trials and meta-analyses. Hum Reprod. 2019;34(4):659–665. doi:10.1093/humrep/dez017.
[4]. Nardo L, Chouliaras S. Adjuvants in IVF-evidence for what works and what does not work. Ups J Med Sci. 2020;125(2):144–151. doi:10.1080/03009734.2020.1751751.
[5]. Holte J, Brodin T. Are we looking under the lamp although we know the lost key is somewhere else? Or is it just about the egg? Ups J Med Sci. 2020;125(2):200–203. doi:10.1080/03009734.2020.1755398.
[6]. Evers JLH. The wobbly evidence base of reproductive medicine. Reprod Biomed Online. 2013;27(6):742–746. doi:10.1016/j.rbmo.2013.06.001.
[7]. Glasziou P, Chalmers I, Rawlins M, et al. When are randomised trials unnecessary? Picking signal from noise. BMJ. 2007;334(7589):349–351. doi:10.1136/bmj.39070.527986.68.
[8]. Sherman RE, Anderson SA, Dal Pan GJ, et al. Real-world evidence – what is it and what can it tell us? N Engl J Med. 2016;375(23):2293–2297. doi:10.1056/NEJMsb1609216.
[9]. Wilkinson J, Bhattacharya S, Duffy J, et al. Reproductive medicine: still more ART than science? BJOG. 2019;126(2):138–141. doi:10.1111/1471-0528.15409.
[10]. Humaidan P, Alviggi C, Fischer R, et al. The novel POSEIDON stratification of “Low prognosis patients in Assisted Reproductive Technology” and its proposed marker of successful outcome. F1000Res. 2016;5:2911. doi:10.12688/f1000research.10382.1.
[11]. Duffy JMN, AlAhwany H, Bhattacharya S, et al. Developing a core outcome set for future infertility research: an international consensus development study. Hum Reprod. 2020;35(12):2725–2734. doi:10.1093/humrep/deaa241.
[12]. Duffy JMN, Bhattacharya S, Bhattacharya S, et al. Standardizing definitions and reporting guidelines for the infertility core outcome set: an international consensus development study. Hum Reprod. 2020;35(12):2735–2745. doi:10.1093/humrep/deaa243.
[13]. Duffy JMN, Adamson GD, Benson E, et al. Top 10 priorities for future infertility research: an international consensus development study. Hum Reprod. 2020;35(12):2715–2724. doi:10.1093/humrep/deaa242.
[14]. ESHRE Capri Workshop Group. Protect us from poor-quality medical research. Hum Reprod. 2018;33(5):770–776. doi:10.1093/humrep/dey056.
[15]. FDA. Real-world evidence 2022. Available from: https://www.fda.gov/science-research/science-and-research-special-topics/real-world-evidence [Accessed May 26, 2022].
Copyright © 2022 Reproductive and Developmental Medicine, Published by Wolters Kluwer Health, Inc.