Web Exclusive Content: Research Letters

Generalizability of Deep Learning Tuberculosis Classifier to COVID-19 Chest Radiographs

New Tricks for an Old Algorithm?

Yi, Paul H. MD*,†; Kim, Tae Kyung BA*; Lin, Cheng Ting MD*,†

doi: 10.1097/RTI.0000000000000532

BRIEF INTRODUCTION

Amidst the COVID-19 pandemic, chest radiographs (CXR) have been proposed as a potentially useful tool for triage and disease progression monitoring. Although CXR is less sensitive than computed tomography (CT), a classic pattern for COVID-19 pneumonia on CXR would preclude the need for further imaging with CT. Deep learning (DL) approaches for COVID-19 detection on CXR have been proposed1,2; however, these studies have been limited by small numbers of images available for model training.

Although the lack of large COVID-19 image data sets is a barrier to DL development, the nonspecific findings of COVID-19 raise the possibility of repurposing CXRs with overlapping radiographic findings as training data. We previously used a similar approach to develop a DL algorithm to detect pulmonary tuberculosis (TB)3; this algorithm was trained using CXRs that did not have diagnoses of TB, but did have similar radiographic findings, in particular consolidation. On the basis of the observation that the CXR findings of COVID-19 and TB overlap considerably,4 we hypothesized that this model would generalize well to COVID-19 CXRs.

METHODS

We collected 88 frontal CXRs with confirmed COVID-19 diagnoses from 3 radiology repositories (Radiopaedia [https://radiopaedia.org]; RSNA [https://www.rsna.org/covid-19]; and SIRM [https://www.sirm.org/category/senza-categoria/covid-19/]). Because not all COVID-19 cases have visible CXR changes, each image was reviewed by a PGY-5 radiology resident and a fellowship-trained cardiothoracic attending to confirm a visible parenchymal abnormality.

Each image was input into our previously reported TB detection DL model,3 which was trained on >110,000 CXRs labeled in a semisupervised manner by a DL model developed using 11,000 images annotated for possible TB by a cardiothoracic radiologist, and which achieved an area under the receiver operating characteristic curve of 0.87 to 0.91 on external data sets from China and the United States. A “positive” prediction was made if the predicted probability of an abnormal CXR was >0.5. Class activation maps (CAMs) were output to visually indicate the areas most suspicious for disease.
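
For readers unfamiliar with this type of pipeline, the following is a minimal sketch in PyTorch of the prediction and CAM steps described above. It is an illustration under stated assumptions, not our original code: the DenseNet-121 backbone, 224 x 224 input size, ImageNet normalization, and weights file name are stand-ins, while the 0.5 probability threshold matches the rule given in the text.

# Minimal inference sketch (PyTorch). The DenseNet-121 backbone, 224 x 224 input,
# ImageNet normalization, and weights file name are stand-in assumptions; this is
# not the original implementation of our TB model.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # assumed ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

model = models.densenet121(num_classes=1)               # stand-in binary classifier
# model.load_state_dict(torch.load("tb_model.pt"))      # hypothetical weights file
model.eval()

def classify_with_cam(path, threshold=0.5):
    """Return (positive?, probability, CAM heatmap) for one frontal CXR."""
    x = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)   # 1 x 3 x 224 x 224
    with torch.no_grad():
        fmap = F.relu(model.features(x))                           # 1 x 1024 x 7 x 7
        pooled = F.adaptive_avg_pool2d(fmap, 1).flatten(1)
        prob = torch.sigmoid(model.classifier(pooled)).item()
        # Class activation map: weight each feature channel by the classifier
        # weight for the single "abnormal" output, sum over channels, upsample.
        w = model.classifier.weight[0].view(1, -1, 1, 1)
        cam = (w * fmap).sum(dim=1, keepdim=True)
        cam = F.interpolate(cam, size=(224, 224), mode="bilinear",
                            align_corners=False)[0, 0]
        cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # scale to [0, 1]
    return prob > threshold, prob, cam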

RESULTS

Our algorithm correctly classified 78 of 88 COVID-19 CXRs as “positive” (89%). For correctly identified images, CAMs demonstrated appropriate disease localization for both focal parenchymal opacities (Fig. 1A) and multifocal disease (Fig. 1B). The 10 false-negative images all had isolated subtle areas of parenchymal disease (Figs. 1C, D).
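
For concreteness, the reported sensitivity is simply the fraction of the 88 confirmed cases flagged “positive”; the short sketch below shows that calculation, where covid_positive_paths is a hypothetical list of the 88 image files and classify_with_cam is the sketch given in the Methods section.

# covid_positive_paths is a hypothetical list of the 88 confirmed COVID-19 CXR files.
predictions = [classify_with_cam(p)[0] for p in covid_positive_paths]
sensitivity = sum(predictions) / len(predictions)        # 78 / 88 ≈ 0.886
print(f"Sensitivity on confirmed COVID-19 CXRs: {sensitivity:.0%}")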

FIGURE 1: CAM heatmaps for COVID-19-positive CXRs. A, CXR (left panel) showing right basilar airspace opacities with corresponding, appropriately localized CAM heatmap (right panel). (Case courtesy of Dr Edgar Lorente, Radiopaedia.org, rID: 75189, under Creative Commons License.) B, CXR (left panel) showing patchy airspace opacities bilaterally with corresponding, appropriately localized CAM heatmap (right panel). (Case courtesy of Henri Vandermeulen, Radiopaedia.org, rID: 75417, under Creative Commons License.) C, Example of a false-negative prediction by our model. CXR shows faint left lower lobe opacities. (Case courtesy of Drs Maria Sole Prevedoni Gorone and Francesco Ballati. Reprinted with permission from the Italian Society of Medical Radiology and Interventional Radiology.5) D, Example of a false-negative prediction by our model. CXR shows hazy left basilar opacities. (Case courtesy of Drs Maria Sole Prevedoni Gorone and Francesco Ballati. Reprinted with permission from the Italian Society of Medical Radiology and Interventional Radiology.5)

COMMENT

COVID-19 is a novel infection; however, its CXR findings overlap with those of other pneumonias, ranging from “ground-glass opacities” to consolidations.4 We hypothesized that a DL model trained to identify a different disease with similar findings could also identify COVID-19-associated pneumonia/acute lung injury. One caveat is that findings rarely seen in COVID-19, but common in TB (eg, lymphadenopathy), would be interpreted as positive by our algorithm. Although variable generalization of CXR DL pneumonia models has been reported,6 we found good generalization of our TB model to COVID-19, despite the model never having “seen” the disease. Our findings are consistent with those of Hurt et al,7 who recently reported generalizability of a generic pneumonia segmentation DL model to 10 COVID-19-positive CXRs from the literature. Altogether, these findings suggest the utility of preexisting CXR DL models for COVID-19 diagnosis.

Our DL model’s CAMs appropriately localized abnormalities for both focal and multifocal disease. In addition to confirming appropriate abnormality identification, these heatmaps could provide a visual aid for nonradiologists and radiologists in training. Diagnostic tools to aid nonexpert readers may become particularly relevant as the pandemic overwhelms hospitals in both the developed world, where health care staffing is reduced by social distancing mandates and the quarantining of workers who contract COVID-19, and the developing world, where few dedicated radiologists are available at baseline. In addition, if used as a triage tool, a DL model could help hasten the isolation of patients with potential COVID-19 on CXR from others in emergency department waiting areas.

We note limitations to our findings. First, our input images consisted only of confirmed COVID-19 cases; performance on negative cases (false-positive and true-negative rates) was assessed using non-TB CXRs in our previous report.3 In addition, our model’s accuracy would likely change in a real-world setting, with accuracy expected to correlate with pretest probability (and could, therefore, decrease). However, our goal was not to describe real-world performance, but rather to demonstrate the ability of a DL model that had never “seen” a case of COVID-19 to identify these cases. Second, our model lacks specificity, as it is essentially a pulmonary opacity detector. Although it can identify COVID-19-positive CXRs, it cannot distinguish COVID-19 from other diseases that cause airspace opacities, such as alveolar pulmonary edema, a task that a well-trained radiologist likely could perform. Thus, a “positive” result could be helpful as a rule-in test to suggest CT, as other pathologies that mimic COVID-19 may also require CT as part of their clinical workup. A recent meta-analysis showed that even CT has a low specificity of 37% for COVID-19, ultimately highlighting the need for radiologic patterns to be interpreted in the clinical context.8 Therefore, we submit these results as a proof-of-concept for a potential CXR screening method in clinical settings with high clinical suspicion for COVID-19, as well as a potential solution to the scarcity of COVID-19 imaging data for DL model development, with one prior study using only 68 positive CXRs.2
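
To make the pretest-probability caveat above concrete, the sketch below computes how positive predictive value (PPV) would fall as clinical suspicion decreases, using the 89% sensitivity observed here and a purely hypothetical specificity of 0.80 (our prior report3 gives AUCs rather than an operating-point specificity).

# Illustrative only: PPV as a function of pretest probability, with the observed
# 89% sensitivity and a purely hypothetical specificity of 0.80.
def ppv(sensitivity, specificity, pretest_probability):
    true_pos = sensitivity * pretest_probability
    false_pos = (1 - specificity) * (1 - pretest_probability)
    return true_pos / (true_pos + false_pos)

for pretest in (0.50, 0.20, 0.05):                       # high to low clinical suspicion
    print(f"pretest {pretest:.0%}: PPV = {ppv(0.89, 0.80, pretest):.0%}")

Under these assumed numbers, PPV is roughly 82% at a pretest probability of 50% but falls to roughly 19% at 5%, illustrating why such a tool is best suited to settings with high clinical suspicion.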

DL models could facilitate high-throughput CXR triage in overwhelmed emergency departments, point-of-care interpretation for nonradiologists on the front line, and potential workload reduction for radiologists. On the basis of this proof-of-concept, we propose that algorithms trained to identify other pneumonias can be repurposed for COVID-19. Alternatively, preexisting CXR data sets with findings similar to those of COVID-19 can be used to overcome the obstacle of small COVID-19-positive data sets for de novo DL model training.

REFERENCES

1. Wang L, Wong A. COVID-Net: a tailored deep convolutional neural network design for detection of COVID-19 cases from chest radiography images. 2020. Available at: http://arxiv.org/abs/2003.09871. Accessed April 1, 2020.
2. Farooq M, Hafeez A. COVID-ResNet: a deep learning framework for screening of COVID19 from radiographs. 2020. Available at: http://arxiv.org/abs/2003.14395. Accessed April 1, 2020.
3. Kim TK, Yi PH, Hager GD, et al. Refining dataset curation methods for deep learning-based automated tuberculosis screening. J Thorac Dis. 2019;12:2. Available at: http://jtd.amegroups.com/article/view/31214/pdf. Accessed October 29, 2019.
4. Wong HYF, Lam HYS, Fong AH-T, et al. Frequency and distribution of chest radiographic findings in COVID-19 positive patients. Radiology. 2020:201160.
5. Neri E, Miele V, Coppola F, et al. Use of CT and artificial intelligence in suspected or COVID-19 positive patients: statement of the Italian Society of Medical and Interventional Radiology. Radiol Med. 2020. In press.
6. Zech JR, Badgeley MA, Liu M, et al. Variable generalization performance of a deep learning model to detect pneumonia in chest radiographs: a cross-sectional study. PLoS Med. 2018;15:e1002683.
7. Hurt B, Kligerman S, Hsiao A. Deep learning localization of pneumonia. J Thorac Imaging. 2020;35:W87–W89.
8. Kim H, Hong H, Yoon SH. Diagnostic performance of CT and reverse transcriptase-polymerase chain reaction for Coronavirus Disease 2019: a meta-analysis. Radiology. 2020:201343.
Keywords: COVID-19; coronavirus; pneumonia; deep learning; artificial intelligence

Copyright © 2020 Wolters Kluwer Health, Inc. All rights reserved.