EDITORIAL

Evidence synthesis in radiography: current challenges and opportunities

Mander, Gordon1,2; Steffensen, Caitlin3; Munn, Zachary4

doi: 10.11124/JBIES-20-00557

There has been recent debate surrounding evidence-based radiography, what evidence-based practice means in this field, and whether we have achieved “evidence-based radiography.”1 If we are to achieve a true evidence-based radiographic practice, there is a need for rigorous evidence syntheses (systematic reviews) to summarize the emerging evidence base in this field.2

Well-conducted evidence syntheses summarizing emerging trends in diagnostic imaging are vital to understanding the value of an imaging investigation, technique, or intervention. As consumer expectations drive demand for highly accurate and immediately available testing strategies, the discerning use of high-value equipment, informed by synthesis of evidence in radiography and the associated sciences, is growing in importance. Although in other fields of health care the traditional review approach commonly centers on the effectiveness of interventions or therapies,3 reviews conducted in the field of radiography represent diverse approaches4-6 across the emerging systematic review typology.3 This diversity includes scope to develop tailored review approaches, such as those investigating the optimization of radiographic technique parameters.7 In radiography particularly, reviews are important in determining the effectiveness of diagnostic imaging pathways, the accuracy of diagnostic tests, the cost-effectiveness of imaging strategies, and the optimal relationship between diagnostic image quality and radiation dose.1 With this in mind, it is important to note some of the opportunities and challenges that are evident when synthesizing evidence on radiographic imaging tests and radiologic interventions.

Challenges with radiation dose evidence synthesis

A recently conducted systematic review by Fernandez and colleagues8 in this issue of JBI Evidence Synthesis describes the effectiveness of different radiation protection methods for interventional proceduralists. The review found that a variety of methods were useful; importantly, however, the authors noted that significant methodological heterogeneity is encountered when synthesizing evidence pertaining both to radiation protection methods and to methods of radiation dose measurement.8

This was also an important finding in a systematic review by Steffensen et al.,7 which describes shortcomings in the reporting of image quality and patient dose in radiography, again associated with significant methodological heterogeneity in approaches to analysis. The authors recommended that a methodological framework for optimizing radiographic acquisition factors be produced as a matter of urgency.

Because of this significant methodological heterogeneity among studies investigating image quality or patient dose, meta-analysis of these study types is generally not possible. Variations in patient population, local protocol requirements, proprietary algorithms, and manufacturer-specific imaging system designs make it challenging to directly compare systems and techniques.
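To make the decision to pool (or not) explicit, reviewers can quantify between-study heterogeneity before attempting meta-analysis. The following is a minimal Python sketch of Cochran's Q and the I² statistic under fixed-effect inverse-variance weighting; the study estimates and variances are invented purely for illustration.

    import numpy as np

    def heterogeneity(effects, variances):
        """Cochran's Q and the I^2 statistic for inverse-variance pooling."""
        effects = np.asarray(effects, dtype=float)
        weights = 1.0 / np.asarray(variances, dtype=float)
        pooled = np.sum(weights * effects) / np.sum(weights)
        q = np.sum(weights * (effects - pooled) ** 2)  # Cochran's Q
        df = len(effects) - 1
        i2 = max(0.0, (q - df) / q) * 100.0 if q > 0 else 0.0
        return q, i2

    # Hypothetical dose-reduction estimates from five studies using
    # different protocols and measurement methods (invented values).
    q, i2 = heterogeneity([0.10, 0.45, -0.20, 0.60, 0.05],
                          [0.02, 0.03, 0.05, 0.04, 0.02])
    print(f"Q = {q:.1f}, I^2 = {i2:.0f}%")  # Q = 10.9, I^2 = 63%: substantial

An I² of this magnitude would ordinarily prompt reviewers to explore the sources of heterogeneity, or to present a structured narrative synthesis, rather than to report a single pooled estimate.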

A large proportion of the radiologic literature focuses on accurate reporting and image optimization; it could therefore be argued that studies of treatment effects are not the raison d’être of diagnostic imaging journals. It has been suggested that randomized controlled trials are far less common in this subject area than in other health care settings, despite the need for well-conducted experimental designs.9 Where studies are concerned with outcomes of an imaging task other than diagnostic accuracy (such as radiation exposure or image quality measures), a number of limitations are commonly encountered. One previously identified example is the difficulty of blinding an observer to the imaging intervention: it is difficult to conceal the presence or absence of an additional magnetic resonance imaging sequence, an ultra-low-dose computed tomography acquisition, or the exclusion of oral contrast media from a scan.

A second limiting factor is the ethical challenge associated with randomized controlled trials in medical radiation science: patients randomized to a control arm necessarily receive a radiation exposure without the added potential value of the imaging intervention under study. This contradicts the justification principle underlying any radiological test, namely that the perceived benefit to the patient must outweigh the risk.

Challenges with diagnostic test accuracy synthesis

In the assessment of the accuracy of diagnostic imaging tests, there is perhaps an overreliance on observational designs to the detriment of more robust experimental designs.9 Accuracy studies in imaging are also more widely cited when they report positive findings, leading to overestimation of test accuracy.10

Accuracy studies of diagnostic imaging tests often use a retrospective cohort approach, as the data are easily accessible through picture archiving and communication systems. However, the potential for bias increases significantly with this approach because the flow and timing of the testing strategy, and the factors that determined whether a patient received both the index test and the reference standard, are often unclear. Guidelines such as the recently updated Preferred Reporting Items for Systematic Reviews and Meta-Analyses – Diagnostic Test Accuracy (PRISMA-DTA) statement, the Standards for Reporting Diagnostic Accuracy Studies (STARD), and other methodological guidance will hopefully improve consistency across studies.11
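For context, accuracy in a single primary study is conventionally summarized from a 2 x 2 cross-classification of index test results against the reference standard. The sketch below uses invented counts purely for illustration; the biases described above act on these counts (for example, by influencing which patients ever receive the reference standard) before any summary statistic is calculated.

    def accuracy_from_2x2(tp, fp, fn, tn):
        """Sensitivity and specificity from a 2 x 2 table of index test
        results cross-classified against the reference standard."""
        sensitivity = tp / (tp + fn)  # proportion of diseased correctly identified
        specificity = tn / (tn + fp)  # proportion of non-diseased correctly identified
        return sensitivity, specificity

    # Invented counts for illustration only.
    sens, spec = accuracy_from_2x2(tp=90, fp=14, fn=10, tn=186)
    print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")  # 0.90, 0.93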

Although meta-analysis using Hierarchical Summary Receiver Operating Characteristic (HSROC) curves allows for synthesis of studies in which the threshold for diagnostic confidence varies, it does not account for the fact that an abnormality of interest may be visualized (and therefore detected) more than once in a given data set or image (eg, the detection of multiple lung nodules on a chest x-ray).12
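For orientation, the within-study level of the Rutter and Gatsonis HSROC model can be sketched as follows; this follows the common presentation of the model and omits the between-study distributions:

    \operatorname{logit}(\pi_{ij}) = (\theta_i + \alpha_i D_{ij}) \exp(-\beta D_{ij}),
    \qquad D_{ij} = +\tfrac{1}{2} \text{ (diseased)}, \; -\tfrac{1}{2} \text{ (non-diseased)}

where π_ij is the probability of a positive index test in study i and disease group j, θ_i is the study-specific positivity threshold, α_i the study-specific accuracy, and β a shape parameter allowing accuracy to vary with threshold. The model assumes a single detection decision per case, which is precisely the assumption that free-response tasks such as nodule detection violate.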

There is, therefore, scope for new statistical meta-analytic models that allow pooling of studies in which accuracy is defined more widely, to include observer performance determined beyond the traditional 2 x 2 table approach.13 Such alternative approaches will need to be considered in the future as more studies aim to determine the diagnostic accuracy of computer-aided reporting and deep-learning artificial intelligence algorithms.14
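As a rough illustration of what such wider definitions involve, the sketch below computes a JAFROC-style figure of merit: the probability that a rating assigned to a true lesion exceeds the highest false-positive rating on a normal case, with ties scoring one half. This is a simplified, hypothetical illustration; the full method adds jackknife-based variance estimation and conventions for unmarked lesions, which are omitted here.

    def jafroc_style_fom(lesion_ratings, normal_case_max_fp_ratings):
        """Wilcoxon-like figure of merit: P(lesion rating > highest
        false-positive rating on a normal case), ties counted as 0.5."""
        total = 0.0
        for y in lesion_ratings:
            for x in normal_case_max_fp_ratings:
                total += 1.0 if y > x else (0.5 if y == x else 0.0)
        return total / (len(lesion_ratings) * len(normal_case_max_fp_ratings))

    # Invented confidence ratings on a 1-5 scale.
    lesions = [4, 5, 3, 4, 2, 5]  # ratings of marks on true lesions
    normals = [1, 2, 3, 1, 2]     # highest false-positive rating per normal case
    print(f"FOM = {jafroc_style_fom(lesions, normals):.2f}")  # FOM = 0.92

Because the units of analysis are lesions and false-positive marks rather than whole cases, statistics of this kind sit outside the 2 x 2 framework that current meta-analytic models assume.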

With the advent of deep-learning algorithms and cutting-edge functional and anatomical imaging techniques, imaging tests are likely to become more demand-driven and ingrained in routine care. High-quality evidence synthesis and meta-analysis are critical to ensure that the use of these high-value systems remains cost-effective, safe, and evidence-based. We advocate for further development and uptake of evidence synthesis approaches in this field and commend the editors of JBI Evidence Synthesis for highlighting an important exemplar review on dose optimization.8

References

1. Munn Z. Why isn’t there an evidence-based radiography? Reflections and a call to action. Radiography 2020; 26 (suppl 2): S14–S16.
2. Nightingale J. Editorial — systematic reviews in radiography. Radiography 2015; 21 (3):213.
3. Munn Z, Stern C, Aromataris E, Lockwood C, Jordan Z. What kind of systematic review should I conduct? A proposed typology and guidance for systematic reviewers in the medical and health sciences. BMC Med Res Methodol 2018; 18 (1):5.
4. Humphrey P, Bennett C, Cramp F. The experiences of women receiving brachytherapy for cervical cancer: a systematic literature review. Radiography 2018; 24 (4):396–403.
5. Munn Z, Moola S, Lisy K, Riitano D, Murphy F. Claustrophobia in magnetic resonance imaging: a systematic review and meta-analysis. Radiography 2015; 21 (2):e59–e63.
6. Younger CWE, Wagner MJ, Douglas C, Warren-Forward H. Describing ionising radiation risk in the clinical setting: a systematic review. Radiography 2019; 25 (1):83–90.
7. Steffensen C, Trypis G, Mander GTW, Munn Z. Optimisation of radiographic acquisition parameters for direct digital radiography: a systematic review. Radiography 2020; S1078-8174(20)30173-5. [Epub ahead of print].
8. Fernandez R, Ellwood L, Barrett D, Weaver J. Safety and effectiveness of strategies to reduce radiation exposure to proceduralists performing cardiac catheterization procedures: a systematic review. JBI Evid Synth 2021; 19 (1):4–33.
9. Rodger M, Ramsay T, Fergusson D. Diagnostic randomized controlled trials: the final frontier. Trials 2012; 13:137–140.
10. Treanor L, Frank RA, Salameh J-P, Sharifabadi AD, McGrath TA, Kraaijpoel N, et al. Selective citation practices in imaging research: are diagnostic accuracy studies with positive titles and conclusions cited more often? AJR Am J Roentgenol 2019; 213 (2):397–403.
11. Frank RA, Bossuyt PM, McInnes MDF. Systematic reviews and meta-analyses of diagnostic test accuracy: the PRISMA-DTA statement. Radiology 2018; 289 (2):313–314.
12. Chakraborty DP. Recent advances in observer performance methodology: jackknife free-response ROC (JAFROC). Radiat Prot Dosimetry 2005; 114 (1–3):26–31.
13. Woznitza N, Piper K, Burke S, Bothamley G. Chest x-ray interpretation by radiographers is not inferior to radiologists: a multireader, multicase comparison using JAFROC (Jack-knife Alternative Free-response Receiver Operating Characteristics) analysis. Acad Radiol 2018; 25 (12):1556–1563.
14. Liu X, Faes L, Kale AU, Wagner SK, Fu DJ, Bruynseels A, et al. A comparison of deep learning performance against health-care professionals in detecting diseases from medical imaging: a systematic review and meta-analysis. Lancet Digital Health 2019; 1 (6):e271–e297.
© 2021 JBI