The coronavirus disease 2019 (COVID-19) pandemic, caused by the severe acute respiratory syndrome coronavirus 2 (SARS-CoV-2), is severely straining health systems throughout the world. Timely identification and isolation of COVID-19–positive patients with mild disease are proving to be key to effectively containing the spread of the infection.1 Although reverse transcriptase polymerase chain reaction (RT-PCR) is the gold standard for the diagnosis of COVID-19,2 many areas of the world face a relative shortage, or even complete unavailability, of nasopharyngeal testing kits.3
Although chest computed tomography (CT) was initially advocated as a tool for diagnosing COVID-19, robust diagnostic accuracy data remain scarce.4 Notably, although initial results suggested high sensitivity of CT in identifying COVID-19, recent data point to its low specificity [37%; 95% confidence interval (CI), 26%–50%] and positive predictive value (PPV) (1.5%–30.7%), suggesting a limited diagnostic yield.5 Because the imaging findings overlap with those of other viral infections and noninfectious conditions, professional organizations have recommended against using CT in patients with suspected infection and only mild clinical features, unless they are at heightened risk for respiratory compromise.6
A major factor potentially confounding the existing literature on the diagnostic yield of CT for COVID-19 identification is that RT-PCR results from a single swab were used as reference standard in several studies, as summarized by Kim et al5 in a recent meta-analysis, thus not accounting for the initial false-negative (FN) cases that subsequently tested positive. In an attempt to provide a higher level of evidence potentially supporting the use of CT for COVID-19 identification, we pooled data from the existing literature by identifying only patients who underwent chest CT with available repeated RT-PCR testing or confirmed true-negative (TN) state. Confirmatory testing for the initial results of the RT-PCR renders a more solid ground truth as proof of SARS-CoV-2 infection.
The purpose of our study was to perform a systematic review and meta-analysis assessing the diagnostic yield of CT for the identification of COVID-19, using repeated RT-PCR testing or confirmed TN state as reference standard for SARS-CoV-2 infection.
MATERIALS AND METHODS
No institutional review board approval was required for this systematic review and meta-analysis, which was designed using the guidelines outlined by the Preferred Reporting Items for a Systematic Review and Meta-Analysis of Diagnostic Test Accuracy Studies (PRISMA-DTA)7 and the Cochrane Handbook of Diagnostic Test Accuracy Reviews.8 The review protocol was not published or registered in advance. No external funding was received for this work.
Search Strategy
In May 2020, we performed a systematic search of the existing literature across the MEDLINE (United States National Library of Medicine), Embase (Elsevier), and Cochrane Central Register of Controlled Trials (CENTRAL) databases, with the goal of identifying studies that assessed the diagnostic accuracy of CT in patients with suspected SARS-CoV-2 infection, using repeated RT-PCR testing as reference standard. We built a free-text search string from the following keywords, combined using "OR" and "AND": Coronavirus, COVID-19, 2019-nCoV, SARS-CoV-2, Computed Tomography, and CT. The COVID-19 Resources online page of the Radiological Society of North America (https://www.rsna.org/covid-19) was also reviewed to identify any additional eligible articles. In addition, the reference lists of the selected studies and of relevant review articles were manually cross-checked. The last online search was performed on May 19, 2020. Citation manager software (EndNote X9.3.1; Thomson Reuters, New York, New York) was used to filter duplicate records. Detailed search strategies for each database are described in Table 1.
TABLE 1 - Search Strategy
MEDLINE (last access May 19, 2020): (coronavirus OR Covid-19 OR 2019-nCoV OR SARS-CoV-2 OR covid) AND (CT OR "computed tomography"); n = 848.
Embase (last access May 19, 2020): (coronavirus OR Covid-19 OR 2019-nCoV OR SARS-CoV-2 OR covid) AND (CT OR "computed tomography"); n = 1122.
CENTRAL (last access May 19, 2020): (coronavirus OR Covid-19 OR 2019-nCoV OR SARS-CoV-2 OR covid) AND (CT OR "computed tomography"); n = 339.
Strategies for electronic searches of the databases: MEDLINE (US National Library of Medicine), Embase (Elsevier), and Cochrane Central Register of Controlled Trials (CENTRAL).
Study Selection
The potentially eligible articles were prescreened by 2 investigators (N.P. and D.B., with 5 and 9 years of experience, respectively) on the basis of title and abstract. In particular, studies were selected based on the presence of the search terms in the article's title or abstract. We limited the study selection to human subjects only. Nonrelevant articles, review articles, case reports, commentaries, and letters were excluded. The full text of the remaining articles was obtained. Discrepancies regarding potential eligibility and inclusion were resolved by consensus. The authors who performed the search were not blinded to the authors' identities or the journals' information. To be eligible for the quantitative analyses, a study needed to (I) assess the use of chest CT in patients with suspected COVID-19, (II) provide sufficient data to (re)construct a 2 × 2 contingency table for calculating diagnostic accuracy, and (III) use a stringent reference standard to confirm the imaging-based diagnosis, including at least 1 short-term follow-up RT-PCR assay confirming the initial RT-PCR result, in particular for patients negative at the first swab. Studies in which a different reference standard unequivocally defined the TN patients, such as isolation of another pathogen from the lungs, were also included. Studies in which patients represented subsets of patients from another included article were excluded.
Data Extraction and Quality Assessment
Relevant data from the included studies were independently extracted by 2 authors (N.P. and D.B.) using an electronic extraction form. Differences in data collection, if any, were resolved by discussion and consensus with a third author (I.C., 20 years of experience in cardiothoracic imaging), referring back to the original article. From each primary study, the following characteristics were extracted: study design, country where the research was performed, single- versus multicenter setting, primary outcome, and reference standard details [number of repeated swabs, time between imaging and reference standard, and total numbers of true-positive (TP), false-positive (FP), TN, and FN cases]. Patient characteristics (number of patients enrolled, age, and sex) and imaging characteristics were also collected.
The methodological rigor of the included studies and potential for bias were evaluated independently by 2 authors (N.P. and D.B.), using the items from a customized Quality Assessment of Diagnostic Accuracy Studies–2 tool (QUADAS-2).9 For each domain (ie, patient selection, index test, reference test, and patient flow), risks of bias and applicability concerns were rated as low (1), high (0), or unclear (0.5). Disagreement was resolved by consensus discussion with a third senior author (I.C., with 20 years of experience). The risk for bias across studies (ie, “publication bias”) was not assessed because there is no generally accepted method for this task and the number of included studies was low.10
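For illustration of the customized scoring, the following minimal Python sketch simply restates the arithmetic behind the per-study totals and percentages reported in Table 4, using the item scores recorded there for Bai et al22; it is not part of the analysis software.

```python
# Customized QUADAS-2 scoring used here: low risk = 1, unclear = 0.5, high risk = 0 per item.
# The 13 item scores below are those listed for Bai et al (2020) in Table 4.
item_scores = [1, 0, 0, 1, 0, 1, 0.5, 1, 0, 1, 1, 1, 1]

total = sum(item_scores)
applicable = len(item_scores)  # all 13 items were applicable for every included study
print(f"total {total}/{applicable} = {100 * total / applicable:.0f}% of total applicable")
# prints: total 8.5/13 = 65% of total applicable
```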
Summary Measures and Statistical Methods
The primary end point of this systematic review and meta-analysis was to estimate the sensitivity, specificity, positive likelihood ratio, negative likelihood ratio, summary diagnostic odds ratio (SDOR), and summary receiver operating characteristic (SROC) curve of CT for the identification of COVID-19. The numbers of TP, FN, TN, and FP findings were used to calculate pooled sensitivity and specificity, along with 95% CIs, for chest CT, using random- or fixed-effects models according to heterogeneity. In studies reporting data from multiple readers, results from each reader were handled as individual studies.11 For studies reporting multiple definitions of positivity of the index test, the definition with the highest Youden index was used in the meta-analysis.12
For each analysis, we calculated the effect size, reported as the Z value, and between-study heterogeneity, expressed as the I² statistic. Between-study heterogeneity was computed as I² = [(χ² − df)/χ²] × 100%, where χ² is the chi-square statistic and df the degrees of freedom. I² values of 25%, 50%, and 75% were assumed to represent low, moderate, and high heterogeneity, respectively. These values describe the percentage of variability in effect estimates resulting from heterogeneity rather than from sampling error (chance).
A meta-regression analysis was performed to examine the effects of the following confounding factors on the pooled SDOR: country of origin (China vs outside China) and prevalence of disease in the enrolled population (>40% vs <40%). Because combining PPV and negative predictive value (NPV) can lead to error in the setting of a meta-analysis,13 we averaged the data for these parameters, stratifying studies according to disease prevalence (>40% and <40%).
A P value less than 0.05 was used as the threshold for statistical significance in all analyses. Data were analyzed using the Comprehensive Meta-Analysis (version 2.2.064; Biostat, Englewood, New Jersey), Excel 365 (Microsoft Corp, Redmond, Washington), and MetaDiSc (version 1.4; Hospital Ramon y Cajal and Universidad Complutense de Madrid, Madrid, Spain) packages.
RESULTS
Study Selection
The PRISMA flow diagram is portrayed in Figure 1. Our literature search retrieved 2309 articles. Nine additional studies were identified by checking the COVID-19 Resources web page of the Radiological Society of North America. After removal of duplicates, 1695 records were considered for screening. A total of 1677 studies were excluded based on title and abstract screening. Eighteen articles were selected for full-text review, of which 10 studies were finally included in the qualitative and quantitative data synthesis.22–31 The specific reasons for exclusion during the full-text screening phase are provided in Table 2.14–21
FIGURE 1: The PRISMA flow diagram demonstrates the process for selecting studies that were included in the meta-analysis. *Eight full-text articles were excluded. Reasons for exclusion are provided in Table 2.
TABLE 2 - Studies Excluded After Full-Text Review
Ai et al14 (2020, Radiology): insufficient data to reconstruct 2 × 2 contingency table.
Chen et al15 (2020, Eur Radiol): insufficient data to reconstruct 2 × 2 contingency table.
Li et al16 (2020, Eur Radiol): insufficient data to reconstruct 2 × 2 contingency table.
Liu et al17 (2020, Eur Radiol): insufficient data to reconstruct 2 × 2 contingency table.
Long et al18 (2020, Eur J Radiol): no patients without COVID-19 enrolled.
Wang et al19 (2020, Eur Radiol): insufficient data to reconstruct 2 × 2 contingency table.
Zhao et al20 (2020, Clin Infect Dis): no patients without COVID-19 enrolled.
Zhifeng et al21 (2020, J Clin Virol): insufficient data to reconstruct 2 × 2 contingency table.
Studies excluded after full-text review, with reasons for exclusion.
Data Extraction
Of the 10 eligible studies, 6 were conducted in a single-center setting,23–28 whereas 4 were performed in a multicenter setting.22,29–31 Six studies were performed in China,24,26,28–31 1 in both China and the United States,22 1 in Belgium,25 1 in Italy,23 and 1 in Japan.27 One study was prospective,23 whereas 9 were retrospective in design.22,24–31 The 10 studies eligible for meta-analysis involved 1332 patients (age, mean ± SD, 50 ± 8 years). The median study sample size was 103 patients (range, 21–424 patients). The total number of patients affected by COVID-19 was 565 (age, mean ± SD, 54 ± 7 years). The total number of patients negative for COVID-19 infection was 637 (age, mean ± SD, 48 ± 13 years).
All CT images were assessed using the Fleischner Society lexicon as reference.32 The major CT findings reported were ground-glass opacity, consolidation, reticulation/thickened interlobular septa, subsegmental vessel enlargement (>3 mm), spider web sign, crazy-paving pattern, and reverse halo sign (Table 3).
TABLE 3 - Characteristics of Included Studies
For each study: first author (year, journal), design, country, scanner, use of contrast media (CM), main CT findings, mean time interval between CT and RT-PCR, mean age, number of patients, prevalence, and TP/TN/FP/FN counts.
Bai et al22 (2020, Radiology): retrospective; China; scanner NR; CM NR; main CT findings: peripheral distribution, GGO, fine reticular opacity, vascular thickening, and reverse halo sign; negative patients selected from a cohort of patients with viral pneumonia from 2017 to 2019; mean age 45–65 years; 424 patients; prevalence 52%; reader 1: TP 158, TN 192, FP 13, FN 61; reader 2: TP 157, TN 181, FP 24, FN 62; reader 3: TP 206, TN 49, FP 156, FN 13.
Caruso et al23 (2020, Radiology): prospective; Italy; Revolution EVO (GE); no CM; main CT findings: peripheral GGO, multilobe and posterior involvement, bilateral distribution, subsegmental vessel enlargement (>3 mm); interval 24 h; mean age 54 years; 158 patients; prevalence 39%; TP 60, TN 54, FP 42, FN 2.
Cheng et al24 (2020, AJR Am J Roentgenol): retrospective; China; LightSpeed Pro16 (GE), LightSpeed VCT (GE), uCT 528 (United Imaging); no CM; main CT findings: GGO, consolidation; interval NR; mean age 50–43 years; 33 patients; prevalence 33%; TP 11, TN 2, FP 20, FN 0.
Dangis et al25 (2020, Radiol Cardiothorac Imaging): retrospective; Belgium; Somatom Definition AS 64-slice; no CM; main CT findings: multiple GGOs, bilateral/multifocal involvement, peripheral distribution and, at a later stage, crazy-paving, consolidation, and reversed halo sign; interval 24 h; mean age 62 years; 192 patients; prevalence 43%; TP 72, TN 102, FP 7, FN 11.
He et al26 (2020, Respir Med): retrospective; China; scanner NR; no CM; main CT findings: ground-glass opacification with or without consolidation, crazy-paving pattern, peripheral and diffuse distribution, and bilateral/multilobular involvement; interval NR; mean age 52 years; 82 patients; prevalence 41%; TP 26, TN 46, FP 2, FN 8.
Himoto et al27 (2020, Jpn J Radiol): retrospective; Japan; Aquilion CX (Canon), Aquilion ONE (Canon); no CM; main CT findings: bilateral GGO and peripheral-predominant lesions without airway abnormalities, nodules, mLN, or pleural effusion; RT-PCR–negative patients proved to be affected by a different infection; mean age 58–66 years; 21 patients; prevalence 28%; reader 1: TP 4, TN 14, FP 1, FN 2; reader 2: TP 5, TN 12, FP 3, FN 1.
Luo et al28 (2020, BMC Pulm Med): retrospective; China; LightSpeed 16-detector (GE), Somatom Definition AS (Siemens); no CM; main CT findings: bilateral lower lobe peripheral involvement, subpleural bandlike GGO, crazy-paving pattern, GGO with or without consolidation; interval NR; mean age 44 years; 73 patients; prevalence 41%; TP 26, TN 29, FP 14, FN 4.
Miao et al29 (2020, Am J Emerg Med): retrospective; China; Optima 670 (GE), Revolution Frontier (GE), Somatom Definition Flash (Siemens); main CT findings: GGO, consolidation, crazy-paving pattern; interval 1 d; mean age 42 years; 130 patients; prevalence 41%; TP 31, TN 61, FP 15, FN 23.
Wen et al30 (2020, Radiol Cardiothorac Imaging): retrospective; China; Brilliance 16-detector (Philips), Brilliance 128-detector (Philips), LightSpeed 16-detector (GE); no CM; main CT findings: pneumonia, including GGO and consolidation; interval 1–3 d; mean age 46 years; 103 patients; prevalence 85%; TP 82, TN 8, FP 7, FN 6.
Zhu et al31 (2020, J Med Virol): retrospective; China; scanner NR; CM NR; main CT findings: pneumonia, including GGO, consolidation, spider web sign, and crazy-paving pattern; interval 24 h; mean age 40 years; 116 patients; prevalence 27%; TP 30, TN 28, FP 56, FN 2.
Study and patient characteristics of the included studies. CM indicates contrast media; GGO, ground-glass opacity; mLN, mediastinal lymphadenopathy; NR, not reported.
Diagnostic Accuracy of CT
Pooled diagnostic accuracy data were as follows: sensitivity, 82% (95% CI, 79%–84%; I² = 88.6%); specificity, 68% (95% CI, 65%–71%; I² = 97.4%); and SDOR, 18 (95% CI, 9.8–32.8; I² = 74.5%). The summary positive likelihood ratio was 3.48 (95% CI, 2.03–5.98; I² = 97.8%), whereas the summary negative likelihood ratio was 0.25 (95% CI, 0.19–0.33; I² = 63.6%) (Figs. 2, 3).
FIGURE 2: Diagnostic accuracy of chest CT for COVID-19 identification. Forest plots of pooled sensitivity (top left), pooled specificity (bottom left), and SROC curve (right). In the forest plots, horizontal lines show CIs for individual studies, and asterisks and vertical dashed lines show the pooled value of sensitivity or specificity with corresponding CIs. In the SROC curve, AUC indicates area under the curve; SE, standard error; Q*, Q* index. Figure 2 can be viewed online in color at www.jcat.org.
FIGURE 3: Diagnostic accuracy of chest CT for COVID-19 identification. Forest plots of pooled positive likelihood ratio (LR) (top), pooled negative LR (middle), and diagnostic odds ratio (OR) (bottom). In the forest plots, horizontal lines show CIs for individual studies, and asterisks and vertical dashed lines show the pooled value of positive LR, negative LR, or diagnostic OR with corresponding CIs. Figure 3 can be viewed online in color at www.jcat.org.
After stratifying studies by COVID-19 prevalence, PPV and NPV were 54% (95% CI, 30%–77%) and 94% (95% CI, 88%–99%), respectively, at a prevalence below 40%. By comparison, PPV and NPV were 80% (95% CI, 62%–91%) and 77% (95% CI, 68%–85%) at a prevalence higher than 40%.
Our meta-regression analyses showed a relative diagnostic odds ratio of 2.65 (95% CI, 0.82–8.58) for disease prevalence higher than 40% in the enrolled population and of 3.76 (95% CI, 1.13–12.52) for China as the country of origin. Neither covariate, however, significantly affected the SDOR (P > 0.05 for all comparisons).
Quality Assessment
Results of the risk of bias assessment for the 10 included studies are illustrated in Figure 4. Four studies were considered to be at low risk of bias in all domains.25,28,30,31 Four studies received a high risk of bias rating in the index test domain because they did not provide a definition of positivity for the interpretation of chest CT findings.22–24,26 The study by Bai et al22 was deemed at high risk of bias in the patient selection domain because it had a case-control design and excluded patients with COVID-19 and no abnormalities on CT images. None of the included studies reported whether the readers who interpreted the reference standard were blinded to the results of the index test. One study received an unclear risk of bias rating in the reference standard and flow and timing domains because a subgroup of participants, who were proved to be infected with pathogens other than SARS-CoV-2, did not undergo RT-PCR testing to rule out concomitant SARS-CoV-2 positivity.27 No applicability concerns were raised for any of the included studies. Detailed results of the quality assessment of each included study based on the customized QUADAS-2 assessment tool are presented in Table 4.
FIGURE 4: Bar charts show risk of bias (left) and concerns of applicability (right) for the 10 included studies, combined using the QUADAS-2 tool.
TABLE 4 - Quality Assessment of Included Studies
For each study, scores are listed for QUADAS-2 items 1–13 (risk of bias: patient selection, items 1–3; index test, items 4–5; reference standard, items 6–7; flow and timing, items 8–10; applicability concerns: patient selection, item 11; index test, item 12; reference standard, item 13), followed by the total score, the total applicable score, and the percentage of the total applicable score.
Bai et al22 (2020, Radiology): 1, 0, 0, 1, 0, 1, 0.5, 1, 0, 1, 1, 1, 1; total 8.5/13 (65%).
Caruso et al23 (2020, Radiology): 1, 1, 1, 0.5, 0, 1, 0.5, 1, 1, 1, 1, 1, 1; total 11/13 (85%).
Cheng et al24 (2020, AJR Am J Roentgenol): 1, 1, 0, 0.5, 0, 1, 0.5, 0.5, 1, 1, 1, 1, 1; total 9.5/13 (73%).
Dangis et al25 (2020, Radiol Cardiothorac Imaging): 1, 1, 1, 1, 1, 1, 0.5, 0.5, 1, 1, 1, 1, 1; total 12/13 (92%).
He et al26 (2020, Respir Med): 0.5, 1, 1, 1, 0, 1, 0.5, 1, 1, 1, 1, 1, 1; total 11/13 (85%).
Himoto et al27 (2020, Jpn J Radiol): 1, 1, 1, 1, 1, 0.5, 0.5, 0.5, 0, 1, 1, 1, 0.5; total 10/13 (77%).
Luo et al28 (2020, BMC Pulm Med): 0.5, 1, 1, 1, 1, 1, 0.5, 1, 1, 1, 1, 1, 1; total 12/13 (92%).
Miao et al29 (2020, Am J Emerg Med): 1, 1, 1, 0.5, 1, 1, 0.5, 1, 1, 1, 1, 1, 1; total 12/13 (92%).
Wen et al30 (2020, Radiol Cardiothorac Imaging): 0.5, 1, 1, 1, 1, 1, 0.5, 1, 1, 1, 1, 1, 1; total 12/13 (92%).
Zhu et al31 (2020, J Med Virol): 1, 1, 1, 1, 1, 1, 0.5, 1, 1, 1, 1, 1, 1; total 12.5/13 (96%).
Detailed scores of the 10 included studies on the 13 items of the customized QUADAS-2 tool. For each item, risk of bias and applicability concerns were rated as low (score of 1), high (0), or unclear (0.5).
Item 1: Was a consecutive or random sample of patients enrolled?
Item 2: Was a case-control design avoided?
Item 3: Did the study avoid inappropriate exclusions?
Item 4: Were the index test results interpreted without knowledge of the results of the reference standard?
Item 5: Was a definition of positive CT findings provided?
Item 6: Is the reference standard likely to correctly classify the target condition?
Item 7: Were the reference standard results interpreted without knowledge of the results of the index test?
Item 8: Was there an appropriate interval between the index test and reference standard?
Item 9: Did all patients receive the same reference standard?
Item 10: Were all patients included in the analysis?
Item 11: Are there concerns that the included patients and setting do not match the review question?
Item 12: Are there concerns that the index test, its conduct, or interpretation differ from the review question?
Item 13: Are there concerns that the target condition as defined by the reference standard does not match the question?
DISCUSSION
Our systematic review and meta-analysis of pooled data from 1332 patients undergoing chest CT, with repeated RT-PCR testing or confirmed TN state as reference standard, indicates that the specificity and PPV of CT for COVID-19 identification at low disease prevalence are higher than previously thought, albeit with lower sensitivity. Of note, pooled sensitivity and specificity of CT were 82% (95% CI, 79%–84%) and 68% (95% CI, 65%–71%), respectively. By analyzing pooled data by disease prevalence (ie, lower than 40% vs greater than 40%), we found that the PPV and NPV of CT were 54% (95% CI, 30%–77%) and 94% (95% CI, 88%–99%) at a disease prevalence less than 40%, and 80% (95% CI, 62%–91%) and 77% (95% CI, 68%–85%) at a disease prevalence higher than 40%. Our data confirm the high degree of heterogeneity (I² for sensitivity, 88.6%; I² for specificity, 97.4%) among currently published studies. Of note, our meta-regression highlighted country of origin and disease prevalence as potential explanatory factors of this heterogeneity, indicating that diagnostic performance (represented by the SDOR) was higher in studies from China and in populations with a disease prevalence higher than 40%.
Our results differ from those of the meta-analysis recently performed by Kim et al,5 who found that the average specificity of CT was 37% (vs an average 68% in our study) with a PPV at low disease prevalence in the range of 1.5% to 30.7% [vs an average of 54% (95% CI, 30%–77%) in our study]. The differences between our meta-analysis and Kim et al's study are related to multiple factors. Kim et al5 had data available from only 4 studies to calculate the specificity,14,23,24,31 whereas data from 10 studies22–31 were used in our meta-analysis. In addition, Kim et al5 included all 1014 patients from the study by Ai et al,14 even though only 258 patients had repeated RT-PCR confirming the initial result. In contrast, we did not include the population from the study by Ai et al14 due to the lack of sufficient data to construct the 2 × 2 contingency table. We hypothesize that these factors contributed to higher specificity but lower sensitivity of our pooled data compared with the study by Kim et al.5 A similar approach was used in a more recent meta-analysis by Xu et al,33 in which only 2 studies (including the article by Ai et al)14,31 were used to compute the specificity of CT.
Our study had notable limitations. First, we could include only the relatively small number of studies available at the time of writing. Nevertheless, the number of included studies is similar to the average number of eligible studies in meta-analyses in the medical field,34 without major applicability concerns. Second, the results of the meta-regression analysis could have been affected by the limited number of included studies; this limitation was, however, partly mitigated by accounting for disease prevalence in our analyses of predictive values. In addition, heterogeneity among studies, for instance in the CT findings used to define a positive examination, could have introduced bias into the index test, limiting the value of the diagnostic accuracy estimates. Furthermore, we handled results obtained from multiple readers as individual studies. Nevertheless, considering the lack of specific recommendations for handling multiple readers in meta-analyses, we believe that this approach is the safest way to mitigate the risk of overestimating diagnostic accuracy.11 Finally, it could be argued that 2 studies included in our meta-analysis were designed differently, in that patients did not undergo repeated RT-PCR. However, the reference standard used to prove the TN state was even more robust in these 2 studies,22,27 namely direct isolation of a different pathogen from infected patients' lungs27 and use of a cohort of patients with viral pneumonia acquired before the onset of the COVID-19 outbreak.22
In conclusion, our systematic review and meta-analysis pooling data from patients who underwent CT and had repeated RT-PCR testing or a confirmed TN state as reference standard indicates that, in regions with a COVID-19 prevalence lower than 40%, chest CT yields higher specificity and PPV than previously reported, albeit with lower sensitivity. Although CT should not be used as a first-line screening test for COVID-19, in the scenario of a global pandemic it may aid in identifying patients with a high pretest probability of SARS-CoV-2 infection when the patient's COVID-19 status is unknown and RT-PCR is unavailable or pending. Circumspect assessment of the burgeoning literature and rigorous adoption of epidemiological methodology remain key to attaining a higher level of evidence on the role of CT in the evaluation of patients with COVID-19. Future research on the use of CT in patients with suspected SARS-CoV-2 infection should emphasize the ability of CT to differentiate COVID-19 from other pulmonary pathologies in symptomatic patients, rather than attempt to adopt CT as a substitute or surrogate for RT-PCR testing.
REFERENCES
1. Identification of healthcare workers and inpatients with suspected COVID-19 in non–US healthcare settings | CDC [Internet]. Available at: https://www.cdc.gov/coronavirus/2019-ncov/hcp/non-us-settings/guidance-identify-hcw-patients.html. Accessed May 12, 2020.
2. CDC 2019-nCoV real-time RT-PCR diagnostic panel (CDC)—manufacturer instructions/package insert | FDA [Internet]. Available at: https://www.fda.gov/media/134922/. Accessed May 12, 2020.
3. Addressing shortages of personal protective equipment (PPE) > Washington State Department of Health [Internet]. Available at: https://www.doh.wa.gov/Newsroom/Articles/ID/1117/Addressing-shortages-of-Personal-Protective-Equipment-PPE. Accessed May 12, 2020.
4. Eng J, Bluemke DA. Imaging publications in the COVID-19 pandemic: applying new research results to clinical practice. Radiology. 2020;201724.
5. Kim H, Hong H, Yoon SH. Diagnostic performance of CT and reverse transcriptase polymerase chain reaction for coronavirus disease 2019: a meta-analysis. Radiology. 2020;296:E145–E155.
6. Rubin GD, Haramati LB, Kanne JP, et al. The role of chest imaging in patient management during the COVID-19 pandemic: a multinational consensus statement from the Fleischner Society. Radiology. 2020;201365.
7. McInnes MDF, Moher D, Thombs BD, et al. Preferred reporting items for a systematic review and meta-analysis of diagnostic test accuracy studies: the PRISMA-DTA statement. JAMA. 2018;319:388–396.
8. Handbook for DTA reviews | Cochrane screening and diagnostic tests [Internet]. Available at: https://methods.cochrane.org/sdt/handbook-dta-reviews. Accessed May 12, 2020.
9. Whiting PF, Rutjes AW, Westwood ME, et al. QUADAS-2: a revised tool for the Quality Assessment of Diagnostic Accuracy Studies. Ann Intern Med. 2011;155:529–536.
10. McInnes MD, Bossuyt PM. Pitfalls of systematic reviews and meta-analyses in imaging research. Radiology. 2015;277:13–21. Available at: http://www.ncbi.nlm.nih.gov/pubmed/26402491. Accessed May 12, 2020.
11. McGrath TA, McInnes MDF, Langer FW, et al. Treatment of multiple test readers in diagnostic accuracy systematic reviews-meta-analyses of imaging studies. Eur J Radiol. 2017;93:59–64.
12. McGrath TA, McInnes MD, Korevaar DA, et al. Meta-analyses of diagnostic accuracy in imaging journals: analysis of pooling techniques and their effect on summary estimates of diagnostic accuracy. Radiology. 2016;281:78–85.
13. Chapter 8: meta-analysis of test performance when there is a "gold standard" | Effective Health Care Program [Internet]. Available at: https://effectivehealthcare.ahrq.gov/products/methods-guidance-tests-metaanalysis/methods. Accessed May 12, 2020.
14. Ai T, Yang Z, Hou H, et al. Correlation of chest CT and RT-PCR testing for coronavirus disease 2019 (COVID-19) in China: a report of 1014 cases. Radiology. 2020;296:E32–E40.
15. Chen X, Tang Y, Mo Y, et al. A diagnostic model for coronavirus disease 2019 (COVID-19) based on radiological semantic and clinical features: a multi-center study. Eur Radiol. 2020;30:4893–4902.
16. Li X, Fang X, Bian Y, et al. Comparison of chest CT findings between COVID-19 pneumonia and other types of viral pneumonia: a two-center retrospective study. Eur Radiol. 2020;1–9. Available at: http://www.ncbi.nlm.nih.gov/pubmed/32394279. Accessed May 20, 2020.
17. Liu M, Zeng W, Wen Y, et al. COVID-19 pneumonia: CT findings of 122 patients and differentiation from influenza pneumonia. Eur Radiol. 2020.
18. Long C, Xu H, Shen Q, et al. Diagnosis of the coronavirus disease (COVID-19): rRT-PCR or CT? Eur J Radiol. 2020;126:108961.
19. Wang H, Wei R, Rao G, et al. Characteristic CT findings distinguishing 2019 novel coronavirus disease (COVID-19) from influenza pneumonia. Eur Radiol. 2020;30:4910–4917.
20. Zhao D, Yao F, Wang L, et al. A comparative study on the clinical features of coronavirus 2019 (COVID-19) pneumonia with other pneumonias. Clin Infect Dis. 2020;71:756–761.
21. Zhifeng J, Feng A, Li T. Consistency analysis of COVID-19 nucleic acid tests and the changes of lung CT. J Clin Virol. 2020;127:104359. Available at: http://www.ncbi.nlm.nih.gov/pubmed/32302956. Accessed May 12, 2020.
22. Bai HX, Hsieh B, Xiong Z, et al. Performance of radiologists in differentiating COVID-19 from viral pneumonia on chest CT. Radiology. 2020;200823.
23. Caruso D, Zerunian M, Polici M, et al. Chest CT features of COVID-19 in Rome, Italy. Radiology. 2020;296:E79–E85.
24. Cheng Z, Lu Y, Cao Q, et al. Clinical features and chest CT manifestations of coronavirus disease 2019 (COVID-19) in a single-center study in Shanghai, China. AJR Am J Roentgenol. 2020;215:121–126.
25. Dangis A, Gieraerts C, De Bruecker Y, et al. Accuracy and reproducibility of low-dose submillisievert chest CT for the diagnosis of COVID-19. Radiol Cardiothorac Imaging. 2020;2:e200196. Available at: http://pubs.rsna.org/doi/10.1148/ryct.2020200196. Accessed May 12, 2020.
26. He J-L, Luo L, Luo Z-D, et al. Diagnostic performance between CT and initial real-time RT-PCR for clinically suspected 2019 coronavirus disease (COVID-19) patients outside Wuhan, China. Respir Med. 2020;168:105980.
27. Himoto Y, Sakata A, Kirita M, et al. Diagnostic performance of chest CT to differentiate COVID-19 pneumonia in non-high-epidemic area in Japan. Jpn J Radiol. 2020;38:400–406.
28. Luo L, Luo Z, Jia Y, et al. CT differential diagnosis of COVID-19 and non–COVID-19 in symptomatic suspects: a practical scoring method. BMC Pulm Med. 2020;20:129. Available at: http://www.ncbi.nlm.nih.gov/pubmed/32381057. Accessed May 20, 2020.
29. Miao C, Jin M, Miao L, et al. Early chest computed tomography to diagnose COVID-19 from suspected patients: a multicenter retrospective study. Am J Emerg Med. 2020.
30. Wen Z, Chi Y, Zhang L, et al. Coronavirus disease 2019: initial detection on chest CT in a retrospective multicenter study of 103 Chinese subjects. Radiol Cardiothorac Imaging. 2020;2:e200092. Available at: http://pubs.rsna.org/doi/10.1148/ryct.2020200092. Accessed May 12, 2020.
31. Zhu W, Xie K, Lu H, et al. Initial clinical features of suspected coronavirus disease 2019 in two emergency departments outside of Hubei, China. J Med Virol. 2020.
32. Hansell DM, Bankier AA, MacMahon H, et al. Fleischner Society: glossary of terms for thoracic imaging. Radiology. 2008;246:697–722.
33. Xu B, Xing Y, Peng J, et al. Chest CT for detecting COVID-19: a systematic review and meta-analysis of diagnostic accuracy. Eur Radiol. 2020;1–8. Available at: http://link.springer.com/10.1007/s00330-020-06934-2. Accessed May 20, 2020.
34. Davey J, Turner RM, Clarke MJ, et al. Characteristics of meta-analyses and their component studies in the Cochrane Database of Systematic Reviews: a cross-sectional, descriptive analysis. BMC Med Res Methodol. 2011;11:160.