

Review Articles

Deep Learning AI Applications in the Imaging of Glioma

Zlochower, Avraham MD; Chow, Daniel S. MD; Chang, Peter MD; Khatri, Deepak MD; Boockvar, John A. MD; Filippi, Christopher G. MD

Author Information
Topics in Magnetic Resonance Imaging 29(2):p 115-00, April 2020. | DOI: 10.1097/RMR.0000000000000237



Glioblastoma (GBM), classified as a World Health Organization (WHO) grade IV tumor, remains the most common primary brain tumor and accounts for 80% of all primary malignant tumors.1 Standard therapy remains the Stupp protocol2 of maximal surgical resection followed by chemoradiation (temozolomide and radiation therapy), although Optune has recently been approved for frontline therapy.3 Treatment options narrow at recurrence, when re-resection, anti-angiogenic therapy with bevacizumab (Avastin), or clinical trials are typically offered. The prognosis for patients diagnosed with GBM remains grim despite many ongoing clinical trials, with a median survival of 18 to 24 months.4 Many factors contribute to the lackluster efficacy of targeted therapeutic approaches: the intrinsically aggressive nature of a high-grade neoplasm, persistence of perivascular niches of "persister cancer cells,"5 the inherent intratumoral genetic variability of glioma, and the difficulty chemotherapeutic agents have in efficiently and effectively crossing the blood-brain barrier. In fact, studies have shown that within an individual high-grade glial tumor there is intratumoral variability, with different portions of the tumor expressing different genes,6,7 so a treatment targeted to a particular mutation may be only partially effective. As new treatments are targeted to the specific tumor mutations of a patient's glioma, an individualized, "precision medicine" approach to glioma may become conceivable, but this will be tempered by the reality of current clinical practice, in which even the best biopsies and surgical resections may not fully capture the genetic variability of the tumor on histopathologic examination.

With the advent of artificial intelligence in radiology, encompassing both machine learning and deep learning, there is an opportunity to more fully capitalize on the quantitative information within the millions of voxels of an MR image, which may allow classification and/or prediction of the genetic features of glioma that play such a crucial role in treatment management and prognostication. Genomic sequencing is not routinely or widely available, albeit decreasing in cost over time. A cheaper, more widely available technology, such as MRI coupled with artificial intelligence algorithms that detect image patterns accurately predicting genetic markers of glioma, would be a significant benefit to neuroradiology and neuro-oncology practice.

Deep learning is a subset of machine learning and artificial intelligence in which feature selection from images and classification happen concurrently in one algorithm, eliminating the need for human intervention during training. It is, in essence, end-to-end machine learning. ImageNet competitions that use AI algorithms to correctly classify animals and objects8–11 have shown that convolutional neural networks (CNNs) consistently outperform all competitors, which partly explains the current popularity of CNNs and a shift away from traditional machine learning in the digital imaging field. Using the quantitative information in all the voxels of a preoperative brain MR for glioma (for example, T1, T2, FLAIR, postcontrast T1, diffusion-weighted imaging, and perfusion MR sequences, with many other tissue contrasts available) is a "big data" mathematical problem for which a CNN may be the best approach. Given enough high-quality data, a CNN will learn the radiologic features, and their relative importance, needed to build a predictive model that can accurately classify an image.4 Many recent research articles report remarkable success in the use of deep learning CNNs to accurately predict the status of 1p19q codeletion, IDH1 mutation, and the MGMT promoter in glioma, as well as tumor grade and long-term survival.
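As a concrete illustration of the convolution, nonlinearity, and pooling operations that let a CNN extract features directly from voxel data, the following minimal NumPy sketch (entirely hypothetical, not drawn from any of the cited studies) applies a hand-set edge kernel to a toy "slice" with a bright square standing in for tumor signal:

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode 2D cross-correlation: slide the kernel over the image."""
    kh, kw = kernel.shape
    out_h = image.shape[0] - kh + 1
    out_w = image.shape[1] - kw + 1
    out = np.empty((out_h, out_w))
    for i in range(out_h):
        for j in range(out_w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Rectified linear unit: the standard CNN nonlinearity."""
    return np.maximum(x, 0.0)

def max_pool(x, size=2):
    """Downsample by taking the maximum over non-overlapping size x size windows."""
    h, w = x.shape[0] // size, x.shape[1] // size
    return x[:h * size, :w * size].reshape(h, size, w, size).max(axis=(1, 3))

# Toy 8x8 "FLAIR slice" with a bright square (a crude stand-in for tumor signal).
img = np.zeros((8, 8))
img[2:6, 2:6] = 1.0

edge_kernel = np.array([[1.0, -1.0]])  # responds to horizontal intensity edges
fmap = max_pool(relu(conv2d(img, edge_kernel)))
print(fmap.shape)  # (4, 3)
```

In a trained CNN the kernel weights are not hand-set as here but learned from labeled examples, which is what makes the pipeline end-to-end.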


Accurate grading of glial tumors is crucial for patient management. Decisions regarding the extent of surgical resection, the need for adjuvant therapy, and overall patient outcomes are largely driven by glial tumor grading. However, even with histologic tissue analysis, grading these tumors can be challenging.12 Therefore, deep learning and CNNs have great potential in neuro-oncology to improve the accuracy of glioma grading, both histologically and, perhaps more importantly, by imaging.

There have been several studies focusing specifically on grading glial tumors using CNNs. Ahammed Muneer et al13 compared two different artificial intelligence systems, Weighted Neighbor Distance using Compound Hierarchy of Algorithms Representing Morphology (WNDCHRM) and a VGG-19 deep CNN, in their ability to classify gliomas. In this study, 20 patients with known WHO grade I, II, III, or IV gliomas were classified using the aforementioned methods. Accuracy was higher with the VGG-19 CNN than with WNDCHRM (98.25% vs 92.86%).13 Despite the relatively small sample size, this study confirms the potential of CNNs for grading gliomas. Whereas Ahammed Muneer et al13 focused on classifying each glioma into 1 of 4 grades, Ge et al14 focused on distinguishing between low-grade (defined as WHO II) and high-grade glioma (WHO III and IV). They proposed a novel multistream CNN and fusion network for glioma classification. Using a dataset obtained from the MICCAI BraTS 2017 competition, multiple MR sequences (T1 postcontrast, T2, and FLAIR images) from patients with low-grade and high-grade gliomas were obtained, and each sequence was fed into its own CNN stream. The extracted features were then fused for classification. In this manner, by combining the 3 sequences, they achieved an accuracy of 90.87%. On an individual basis, the postcontrast T1-weighted images were the most accurate in distinguishing high-grade from low-grade glioma. In another study, Yang et al15 compared 2 different CNNs, AlexNet and GoogLeNet, in their ability to distinguish between lower grade gliomas (defined as WHO II and III) and high-grade gliomas (WHO IV). Using T1 postcontrast images from 113 patients with pathologically proven gliomas, they compared the accuracy of these 2 CNNs when trained from scratch and when pretrained with fine tuning. The results demonstrate superior accuracy using the pretrained CNNs, with GoogLeNet achieving the highest accuracy of 94.5%.15
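The multistream design described above can be sketched schematically. In the toy NumPy example below, the per-sequence "CNN streams" are stand-in random linear projections and all weights, dimensions, and names are hypothetical; it illustrates only the fusion idea (per-sequence features concatenated before a single classifier output), not the actual architecture of Ge et al:

```python
import numpy as np

rng = np.random.default_rng(0)

def extract_features(volume, weights):
    """Stand-in for one CNN stream: project a flattened volume to a feature vector."""
    return np.tanh(weights @ volume.ravel())

# Hypothetical weights: one 4-feature stream per sequence, plus a fused classifier.
streams = {name: rng.normal(size=(4, 64)) for name in ("t1c", "t2", "flair")}
w_fused = rng.normal(size=12)  # 3 streams x 4 features each

def predict_grade(volumes):
    """Fuse per-sequence features and emit P(high-grade) via a logistic output."""
    fused = np.concatenate(
        [extract_features(volumes[n], streams[n]) for n in ("t1c", "t2", "flair")]
    )
    return 1.0 / (1.0 + np.exp(-w_fused @ fused))

# One toy case: an 8x8 "slice" per sequence.
case = {n: rng.normal(size=(8, 8)) for n in ("t1c", "t2", "flair")}
p = predict_grade(case)
print(0.0 < p < 1.0)  # True
```

The design choice this mimics is late fusion: each sequence keeps its own feature extractor, so a sequence that is individually most informative (here, postcontrast T1 in the Ge et al study) can still dominate the fused representation.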

The use of CNNs for glioma classification is an active research area not just in imaging, but in pathology as well. Ertosun and Rubin12 utilized histopathologic images obtained from The Cancer Genome Atlas (TCGA) databank in patients with lower grade gliomas (defined as WHO II and III) and high-grade gliomas (WHO IV). Using an ensemble of CNNs, they were able to distinguish between high-grade and lower grade gliomas with an accuracy of 96% and between WHO grades II and III with an accuracy of 71%.12 It is important to point out a discrepancy when comparing these studies: Ertosun and Rubin12 and Yang et al15 defined "low-grade glioma" as WHO II and III, while Ge et al14 defined WHO III as high grade, which limits direct comparison. It is nonetheless of interest that Ertosun and Rubin12 had significantly worse accuracy distinguishing WHO II from III than WHO II/III from IV (71% vs 96%). Thus, differentiating WHO grade II from III remains a promising area for future research.


Isocitrate dehydrogenase (IDH) is an enzyme of the Krebs cycle, part of the cellular energy metabolic pathway. In IDH wild-type (wt) glioma, the enzyme normally converts isocitrate to alpha-ketoglutarate, whereas mutant IDH further converts alpha-ketoglutarate to the oncometabolite 2-hydroxyglutarate. Having an IDH1 or IDH2 mutation is associated with improved survival,16 as these gliomas respond better to temozolomide therapy. Furthermore, in the 2016 WHO classification system, both low-grade astrocytomas and oligodendrogliomas are classified by the presence of IDH1 and IDH2 mutations as well as loss of portions of chromosomes 1 and 19, known as 1p19q codeletion.17 IDH-mutant gliomas demonstrate lower regional cerebral blood volume and flow on MR perfusion, higher apparent diffusion coefficients on diffusion MR imaging, and improved survival.18,19 In a study by Beiko et al,20 resection of nonenhancing tumor after gross total resection of the enhancing component correlated with improved progression-free survival in IDH-mutant WHO III and IV gliomas but not in IDH wild-type tumors; thus, knowledge of IDH mutation status before surgical resection may be important.

In a study by Liang et al,21 a multimodal 3D DenseNet predicting IDH status from the publicly available BraTS 2017 database, which included axial T1, postcontrast T1, T2, and FLAIR images from 102 GBM patients and 65 low-grade glioma (LGG) patients, achieved 84.6% accuracy with 78.5% sensitivity, 88% specificity, and an AUC of 85.7%.21 To optimize model performance, patient age, sex, and tumor grade were included. A significant limitation of this study is the inclusion of grade III tumors in the low-grade glioma category (WHO II), as grade III tumors are high-grade neoplasms.21 In a study by Li et al,22 a deep learning-based radiomics (DLR) approach was developed to predict IDH1 mutation in a cohort of 151 patients with WHO II low-grade glioma; the modified CNN had an AUC of 95% when DLR was combined with additional axial FLAIR and postcontrast T1 MR images, compared with 86% for radiomics alone. Lack of independent external validation, no clear methodology on measures to prevent overfitting, and the use of pathologic diagnoses no longer accepted (eg, oligoastrocytoma) limit the significance of these results.22 A residual CNN trained on axial FLAIR, T2, precontrast T1, and postcontrast T1 images to predict IDH mutation status in a multicenter study by Chang et al23 showed accuracies ranging from 82.8% to 85.7% across the training, validation, and testing models, which improved to 87.3% to 89.1% with the addition of clinical data (patient age). A major limitation of their approach was the lack of dropout or regularization to improve model performance, but a relative strength was the use of logistic regression models based on age, as IDH mutations are more commonly seen in younger patients with glioma.23
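The accuracy, sensitivity, and specificity figures quoted throughout these studies derive from a standard confusion-matrix calculation; a minimal sketch (with toy labels, not data from any cited study) is:

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity (true-positive rate), and specificity from 0/1 labels."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))  # true positives
    tn = np.sum((y_true == 0) & (y_pred == 0))  # true negatives
    fp = np.sum((y_true == 0) & (y_pred == 1))  # false positives
    fn = np.sum((y_true == 1) & (y_pred == 0))  # false negatives
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# Toy example: 1 = IDH-mutant, 0 = wild-type.
truth = [1, 1, 1, 1, 0, 0, 0, 0]
preds = [1, 1, 1, 0, 0, 0, 0, 1]
m = binary_metrics(truth, preds)
print(m["accuracy"], m["sensitivity"], m["specificity"])  # 0.75 0.75 0.75
```

The AUC reported alongside these metrics additionally sweeps the classifier's decision threshold over the continuous model output rather than fixing it at a single cut-off.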

A couple of recent manuscripts have achieved outstanding results. Chang et al24 used a novel 2D/3D hybrid CNN with 259 cases of LGG and high-grade glioma from the TCIA, achieving an accuracy of 94%. They used principal component analysis (PCA) to reduce features that were highly correlated with one another in order to determine which features had the largest impact on the final classification for a particular mutation status.24 Absent or minimal enhancement, central areas of low T1 and FLAIR signal, and well-defined tumor margins were the features that mattered most to prediction of IDH mutation status (Fig. 1), whereas Liang et al21 reported that metrics derived from the T2-weighted sequence were the best IDH predictors.24 In a recent study by Yogananda et al,25 a 3D-Dense-UNet CNN trained on 94 IDH-mutant and 120 wild-type gliomas from The Cancer Imaging Archive (TCIA) and The Cancer Genome Atlas (TCGA), using 3-fold cross validation and multiparametric MR images, achieved 98% sensitivity and 97% specificity, with an AUC of 99%.

FIGURE 1. 2D/3D hybrid CNN predicting IDH mutation status with 94% accuracy. The 4 images on the left show IDH wild-type and the 4 images on the right show IDH-mutant tumors. The IDH-mutant GBMs show minimal or no contrast enhancement, which principal component analysis (PCA) determined to be a key feature the model used to make its prediction, along with well-marginated tumor borders and cysts that were T1 and FLAIR hypointense.
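The PCA step used for dimensionality reduction in the Chang et al work can be sketched generically. This NumPy implementation via singular value decomposition, applied to simulated correlated features, is illustrative only and is not the authors' code; the feature counts are arbitrary:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project a feature matrix X (cases x features) onto its top principal components."""
    Xc = X - X.mean(axis=0)                      # center each feature
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:n_components].T              # scores in the reduced space

rng = np.random.default_rng(1)
base = rng.normal(size=(20, 3))                  # 3 underlying independent factors
mix = rng.normal(size=(3, 10))
X = base @ mix                                   # 10 highly correlated derived features
Z = pca_reduce(X, 3)
print(Z.shape)  # (20, 3)
```

Because the 10 simulated features are linear mixtures of only 3 factors, the top 3 components capture essentially all of the variance, which is the point of applying PCA to highly correlated radiomic features: redundant measurements collapse into a small set of uncorrelated directions whose loadings can then be inspected.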


Other than a single manuscript reporting a slight frontal lobe predilection, there are no consistent MR imaging features that can reliably and accurately predict 1p19q-codeleted tumors,26 and there is a paucity of manuscripts using CNNs to predict 1p19q codeletion. In one of the more successful studies, Chang et al24 used a 2D/3D hybrid CNN to predict 1p19q codeletion status, achieving an accuracy of 92%. In a recent study by Ge et al,14 a novel multistream deep CNN (a 7-layer 2D CNN) achieved an accuracy of 89.39% on a cohort of 159 cases using contrast-enhanced T1 and T2-weighted MR images and data augmentation.14 Another study, by Akkus et al,27 including 159 patients and using a multiscale CNN, had 87.70% accuracy for prediction of 1p19q codeletion status. Only the manuscript by Chang et al24 attempted to peek into the "black box" of the CNN, employing principal component analysis to reduce features that were highly correlated with each other. Frontal lobe location, ill-defined tumor borders, and larger amounts of contrast enhancement were the most important features predicting 1p19q codeletion status (Fig. 2).

FIGURE 2. 2D/3D hybrid CNN predicting 1p19q codeletion status with an accuracy of 92%. The 4 images on the left show non-codeleted tumors and the 4 images on the right show 1p19q codeletion. PCA determined that ill-defined tumor margins and greater amounts of enhancement were key imaging features that the CNN used to make its predictions.


O-6-methylguanine-DNA-methyltransferase (MGMT) is a DNA repair enzyme that dealkylates DNA; hypermethylation of the MGMT promoter silences the gene, and patients with this epigenetic alteration respond better to temozolomide chemotherapy and have an improved prognosis.28 The development of pseudoprogression (PsP), in which MR images look worse (mimicking disease progression) about 3 months after chemoradiation even though the patient is actually responding to therapy, is strongly associated with this alteration.28,29

There has been modest success using CNNs to predict MGMT promoter methylation status in glioma on preoperative MR images. Han et al30 used a bidirectional recurrent CNN on 260 patients acquired from the TCIA and TCGA, using only T1, T2, and FLAIR scans (no postcontrast T1), and obtained an accuracy of 67% on the validation set and 62% on the test data. Inclusion of postcontrast imaging could potentially have improved model accuracy. Interestingly, this work allowed end-users to interact with the CNN and compare different filters and layers of the model,30 which may be a way to tailor models more effectively and enlist greater engagement of neuroradiologists in incorporating CNNs into clinical practice. Korfiatis et al31 compared 3 different residual CNNs for predicting MGMT status on 155 brain MR examinations with no tumor-segmentation preprocessing and achieved accuracies of 94.90%, 80.72%, and 75.75%, respectively. In recent work by Chang et al,24 using 256 brain MRs from the TCGA and TCIA datasets, a novel 2D/3D hybrid CNN predicted MGMT promoter status with an accuracy of 83%. Again, in this paper, principal component analysis for dimensionality reduction determined that the most important imaging features for prediction of MGMT status included heterogeneous and nodular enhancement, presence of eccentric cysts, more masslike T2/FLAIR signal with cortical involvement, and a slight tendency toward frontal and temporal lobe locations24 (Fig. 3). These findings corroborate prior MR genomics studies in which tumors with MGMT promoter methylation were more likely to have eccentric and/or necrotic cysts and a frontal lobe predominance.14,22,23,27

FIGURE 3. 2D/3D hybrid CNN predicting MGMT promoter status with an accuracy of 83%. The 4 images on the left are tumors without MGMT promoter methylation and the 4 on the right have the methylated MGMT promoter. PCA suggested that more heterogeneous enhancement, eccentric cysts, and masslike abnormal T2/FLAIR signal, particularly cortical, were key features used by the CNN to make its predictions.


It is difficult to reliably differentiate true progression (TP) from PsP on MR imaging, and this remains a difficult dilemma in patient management that has been addressed with many strategies, from MR perfusion to watchful waiting. With the development of newer immunotherapies, inflammatory responses with complex signal characteristics have emerged, which adds to the difficulty in distinguishing PsP from TP. Many cases of PsP are not reliably diagnosed using RANO criteria,32 and a recent meta-analysis suggested that up to 36% of cases are underdiagnosed.33 Jang et al34 used a hybrid deep learning and machine learning CNN-LSTM (long short-term memory) technique on GBM patients from two institutions to classify PsP versus TP, achieving an AUC of 0.83. Lack of "ground truth" (histopathologically proven cases) and an insufficient number of well-curated, annotated MR images of PsP may explain the relative absence of CNN manuscripts devoted to prediction of PsP, which remains a critical, unmet need in neuro-oncology.


Independent risk factors that portend poor overall survival (OS) in patients with GBM include male gender, older age at diagnosis (>60 years), poor preoperative Karnofsky Performance Scale score (<70; a clinical metric of functional status), Caucasian ethnicity, advanced tumor with partial resection, and surgery without adjuvant chemoradiation.35–37 In a study by LaCroix et al,38 independent predictors of OS in GBM patients not only confirmed the importance of older age and Karnofsky Performance Scale scores but also included MR-derived imaging features, including the extent of resection, degree of necrosis, and enhancement on preoperative imaging. Additional MR studies have shown that both nonenhancing tumor and areas of infiltration are good predictors of OS,39 and that poor OS correlates with higher regional cerebral blood volume (rCBV) and EGFRvIII amplification (a marker of neo-angiogenesis) in patients with GBM.40,41

In a hybrid machine learning (ML)-deep learning study by Sun et al,42 a 3D CNN was used for tumor segmentation, followed by an ML pipeline that extracted radiomics features and applied a decision tree regression model. The study examined 210 high-grade and 75 low-grade gliomas from the BraTS (Brain Tumor Segmentation) 2018 dataset, with 66 unknown cases, using T1, T2, FLAIR, and postcontrast T1 images to predict OS in GBM patients, and achieved a modest 61% accuracy across short-term (<10 months), mid-term (10 to 15 months), and long-term (>15 months) survivors.42 In a similar approach, Nie et al43 used a 3D CNN to automatically extract features from multimodal preoperative brain MR images (T1, functional MRI, and diffusion tensor imaging) of high-grade gliomas (69 patients with WHO III or IV tumors) to train a support vector machine (SVM) model to predict long-term versus short-term OS in GBM patients, with less than 22 months defined as short-term and greater than 22 months as long-term survival.43 Clinical features, including age at diagnosis, gender, tumor location, tumor size, and WHO grade, were added to the SVM model, and the greatest accuracy, nearly 90%, was obtained using the 3D CNN together with the SVM and hand-crafted features.43 Subsequently, this research group performed the same study but swapped the functional MR data for axial resting-state connectivity maps (rs-fMRI), keeping the same hand-crafted clinical features, and achieved a similar accuracy of 90.66%.44 The study by Nie et al44 has several important limitations, including lack of independent test sets, MR examinations from a single institution using the same MR scanner from one vendor, the arbitrary designation of 650 days as the cut-off between good and poor survival, the absence of genomic data (IDH, EGFR, and MGMT status), and failure to account for extent of tumor resection (partial vs gross total) and the type and duration of treatment.
In an abstract by Chang et al,45 a 2D/3D hybrid CNN combined with clinical features (tumor location, age, and Karnofsky scores) classified good survival (greater than 24 months) versus poor survival (less than 6 months) with 82% accuracy (Fig. 4). There was no exploration of which imaging features were most salient in OS prediction. In a recent study by Lao et al,46 a deep learning-based radiomics model with transfer learning, applied to 112 patients using T1, T2, FLAIR, and postcontrast T1 images plus clinical data (age and Karnofsky scores), achieved a modest accuracy of 71%.

FIGURE 4. 2D/3D hybrid CNN predicting poor (<6 months) versus good (>24 months) survival with an accuracy of 82%, using clinical information (age, tumor location, and Karnofsky scores) in the model. The 4 images on the left were classified as good survival, and the 4 images on the right as poor survival.
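The pattern common to these survival studies, concatenating learned imaging features with clinical covariates before a final classifier, can be sketched as follows. The feature dimensions, standardization constants, random "deep features," and nearest-centroid classifier here are all hypothetical simplifications, not the pipeline of any cited study:

```python
import numpy as np

rng = np.random.default_rng(2)

def fuse(deep_features, age, karnofsky):
    """Concatenate CNN-derived image features with crudely standardized clinical covariates."""
    clinical = np.array([(age - 60.0) / 15.0, (karnofsky - 70.0) / 20.0])
    return np.concatenate([deep_features, clinical])

# Hypothetical training cohort: label 1 = good survival (>24 mo), 0 = poor (<6 mo).
X = np.stack([
    fuse(rng.normal(size=8), rng.uniform(30, 80), rng.choice([60, 80, 100]))
    for _ in range(40)
])
y = np.repeat([0, 1], 20)

# Nearest-centroid stand-in for the final classifier stage.
centroids = {c: X[y == c].mean(axis=0) for c in (0, 1)}

def classify(sample):
    """Assign the class whose centroid is closest in the fused feature space."""
    return min(centroids, key=lambda c: np.linalg.norm(sample - centroids[c]))

test_case = fuse(rng.normal(size=8), 55.0, 90.0)
print(classify(test_case) in (0, 1))  # True
```

In the cited studies, the classifier stage was an SVM (Nie et al) or a learned network head (Chang et al), but the fusion step, appending age, Karnofsky score, and similar covariates to the imaging feature vector, is what the sketch isolates.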


Despite recent hype, the implementation of AI algorithms into clinical neuroimaging practice is not yet routine and faces several challenges. Large, well-annotated data sets are needed to train, validate, and test CNNs, which is costly and time-consuming because radiologists' time is more valuable to the health care enterprise when spent reading clinical cases. Many diseases, including glioma, are thankfully relatively rare, so acquiring a large amount of data is a real problem. To compete with the private sector buying up medical data, a culture of multi-institutional data sharing may be needed, with a more horizontal and less paternalistic, hierarchical academic culture. Large-scale standardization of imaging protocols may also be needed, as even good CNNs may underperform when prospectively tested on independent, external, "real world" data. Additional challenges include the expense and limited availability of "ground truth" genomics and the potential need for greater tissue sampling and/or biopsy, even in cases of gross total resection, to better characterize MR imaging in relation to CNN predictions using techniques such as principal component analysis for dimensionality reduction, because this may inform neuroradiologists about salient image features. Finally, thoughtful integration of CNNs into routine clinical workflow, whether through on-site or in-cloud solutions, will need "buy-in" from neuroradiologists. Too many mouse clicks or disruptions to workflow would be a death knell for CNN deployment.

In summary, tumor grading and prediction of IDH mutation, 1p19q codeletion, MGMT promoter status, and OS are being achieved with good success by CNNs, and prediction accuracies exceeding 80% to 90% in many cases likely already surpass human-level performance. With newer network architectures, large-scale data sharing, and CNNs combined with clinical data from the EMR, performance is likely to improve. Neuroradiologists will then be more likely to embrace artificial intelligence, when it seems more like augmented intelligence that enhances their ability to make more precise, impactful diagnoses.


1. Ostrom QT, Gittleman H, Fulop J, et al. CBTRUS statistical report: primary brain and central nervous system tumors diagnosed in the United States in 2008–2012. Neuro Oncol 2015; 17: (suppl 4): iv1–iv62.
2. Stupp R, Mason WP, van den Bent MJ, et al. Radiotherapy plus concomitant and adjuvant temozolomide for glioblastoma. N Engl J Med 2005; 352:987–996.
3. Fabian D, Guillermo Prieto Eibl MDP, Alnahhas J, et al. Treatment of glioblastoma (GBM) with the addition of tumor-treating fields (TTF): a review. Cancers 2019; 11:E174.
4. Chow DS, Chang P, Weinberg B, et al. Imaging genetic heterogeneity in glioblastoma. AJR Am J Roentgenol 2018; 210:30–38.
5. Sattiraju A, Mintz A. Pericytes in glioblastoma: multifaceted role within tumor microenvironments and potential for therapeutic interventions. Adv Exp Med Biol 2019; 1147:65–91.
6. Patel AP, Tirosh I, Trombetta JJ, et al. Single-cell RNA-seq highlights intratumoral heterogeneity of primary glioblastoma. Science 2014; 344:1396–1401.
7. Sottoriva A, Spiteri J, Piccirillo SG, et al. Intratumoral heterogeneity in human glioblastoma reflects cancer evolutionary dynamics. Proc Natl Acad Sci U S A 2013; 110:4009–4014.
8. Le Cun Y, Bengio Y, Hinton G. Deep learning. Nature 2015; 521:436–444.
9. Simonyan K, Vedaldi A, Zisserman A. Deep Inside Convolutional Neural Networks: Visualising Image Classification Models and Saliency Maps. Accessed April 19, 2014.
10. He K, Zhang X, Ren S, Sun J. Deep Residual Learning for Image Recognition. Accessed December 10, 2015.
11. He K, Zhang X, Ren S, et al. Deep Residual Learning for Image Recognition. Available at: Accessed December 10, 2015.
12. Ertosun MG, Rubin DL. Automated grading of gliomas using deep learning in digital pathology images: a modular approach with ensemble of convolutional neural networks. AMIA Annu Symp Proc 2015; 2015:1899–1908.
13. Ahammed Muneer KV, Rajendran VR, Paul JK. Glioma tumor grade identification using artificial intelligence techniques. J Med Syst 2019; 43:113.
14. Ge C, Gu IY, Jakola AS, Yang J. Deep Learning and Multi-Sensor Fusion for Glioma Classification Using Multistream 2D Convolutional Networks. Abstract in Proceedings of International Conference of the IEEE Engineering in Medicine and Biology Society (EMBC), Honolulu, HI, 2018, pp. 5894–5897.
15. Yang Y, Yan LF, Zhang X, et al. Glioma grading on conventional MR images: a deep learning study with transfer learning. Front Neurosci 2018; 12:804.
16. Ducray F, Idbaih A, Wang XW, et al. Predictive and prognostic factors for glioma. Expert Rev Anticancer Ther 2011; 11:781–789.
17. Louis DN, Perry A, Reifenberger G, et al. The 2016 World Health Organization Classification of tumors of the central nervous system: a summary. Acta Neuropathol 2016; 131:803–820.
18. Kickingereder P, Sahm F, Radbruch A, et al. IDH mutation status is associated with a distinct hypoxia/angiogenesis transcriptome signature which is non-invasively predictable with rCBV imaging in human glioma. Sci Rep 2015; 5:16238.
19. Law M, Young RJ, Babb JS, et al. Gliomas: predicting time to progression or survival with cerebral blood volume measurements at dynamic susceptibility-weighted contrast-enhanced perfusion MR imaging. Radiology 2008; 247:490–498.
20. Beiko J, Suki D, Hess KR, et al. IDH mutant malignant astrocytomas are more amenable to surgical resection and have a survival benefit associated with maximal surgical resection. Neuro Oncol 2014; 16:81–91.
21. Liang S, Zhang R, Liang D, et al. Multimodal 3D DenseNet for IDH genotype prediction in gliomas. Genes 2018; 9:1–17.
22. Li Z, Wang Y, Yu J, et al. Deep learning based radiomics (DLR) and its usage in noninvasive IDH1 prediction for low grade glioma. Sci Rep 2017; 7:5467.
23. Chang K, Bai HX, Zhou H, et al. Residual convolutional neural networks for determination of IDH status in low- and high grade gliomas from MR imaging. Clin Cancer Res 2018; 24:1073–1081.
24. Chang P, Grinband J, Weinberg BD, et al. Deep learning convolutional neural networks accurately classify genetic mutations in glioma. AJNR Am J Neuroradiol 2018; 39:1201–1207.
25. Bangalore Yogananda CG, Shah BR, Vejdani-Jahromi M, et al. A novel fully automated MRI-based deep learning method for classification of IDH mutation status in brain gliomas. Neuro Oncol 2020; 22:402–411.
26. Xiong J, Tan W, Wen J, et al. Combination of diffusion tensor imaging and conventional MRI correlates with isocitrate dehydrogenase 1/2 mutations but not 1p/19q genotyping in oligodendroglial tumors. Eur Radiol 2016; 26:1705–1715.
27. Akkus Z, Ali I, Sedlar J, et al. Predicting deletion of chromosomal arms of 1p/19q in low-grade glioma from MR images using machine intelligence. J Digit Imaging 2017; 30:469–476.
28. Hegi ME, Diserens AC, Gorlia T, et al. MGMT gene silencing and benefit from temozolomide in glioblastoma. N Engl J Med 2005; 352:997–1003.
29. Gorlia T, van den Bent MJ, Hegi ME, et al. Nomograms for predicting survival of patients with newly diagnosed glioblastoma: prognostic factor analysis of EORTC and NCIC trial 26981-22981/CE.3. Lancet Oncol 2008; 9:29–38.
30. Han L, Kamdar MR. MRI to MGMT: predicting methylation status in glioblastoma using convolutional recurrent neural networks. Pac Symp Biocomput 2018; 23:331–342.
31. Korfiatis P, Kline TL, Lachance DH, et al. Residual deep convolutional neural network predicts MGMT methylation status. J Digit Imaging 2017; 30:622–628.
32. Nasseri M, Gahramanov S, Netto JP, et al. Evaluation of pseudoprogression in patients with glioblastoma multiforme using dynamic magnetic resonance imaging with ferumoxytol calls RANO criteria into question. Neuro Oncol 2014; 16:1146–1154.
33. Abbasi AW, Westerlaan HE, Holtman GA, et al. Incidence of tumor progression and pseudoprogression in high grade gliomas: a systematic review and meta-analysis. Clin Neuroradiol 2018; 28:401–411.
34. Jang BS, Jeon SH, Kim IH, et al. Predictor of pseudoprogression versus progression using machine learning algorithm in glioblastoma. Sci Rep 2018; 8:12516.
35. Wang J, Hu G, Quan X. Analysis of the factors affecting the prognosis of glioblastoma patients. Open Med 2019; 14:331–335.
36. Tian M, Ma W, Chen Y, et al. Impact of gender on the survival of patients with glioblastoma. Biosci Rep 2018; 38:1–9.
37. Thumma SR, Fairbanks RK, Lamoureux WT, et al. Effect of pretreatment clinical factors on overall survival in glioblastoma multiforme: a Surveillance Epidemiology and End Results (SEER) population analysis. World J Surg Oncol 2012; 10:75.
38. LaCroix M, Abi-Said D, Fourney DR, et al. A multivariate analysis of 416 patients with glioblastoma multiforme: prognosis, extent of resection, and survival. J Neurosurg 2001; 95:190–198.
39. Pope WB, Sayre J, Perlina A, et al. MR imaging correlates of survival in patients with high grade glioma. AJNR Am J Neuroradiol 2005; 26:2466–2474.
40. Jain R, Poisson L, Narang J, et al. Genomic mapping and survival prediction in glioblastoma: molecular subclassification strengthened by hemodynamic imaging biomarkers. Radiology 2013; 267:212–220.
41. Jain R, Poisson LM, Gutman D, et al. Outcome prediction in patients with glioblastoma by using imaging, clinical, and genomic biomarkers: focus on the nonenhancing component of the tumor. Radiology 2014; 272:484–493.
42. Sun L, Zhang S, Chen H, et al. Brain tumor segmentation and survival prediction using multimodal MRI scans with deep learning. Front Neurosci 2019; 13:1–8.
43. Nie D, Lu J, Zhang H, et al. Multi-channel 3D deep feature learning for survival time prediction of brain tumor patients using multi-modal neuroimages. Sci Rep 2019; 9:1103.
44. Nie D, Zhang H, Adeli E, et al. 3D deep learning for multi-modal imaging-guided survival time prediction of brain tumor patients. Med Image Comput Comput Assist Interv 2016; 9901:212–220.
45. Chang P, Maffie J, Lignelli A, et al. Deep Learning and Glioma Radiogenomics: A TCIA/TCGA Project. Abstract in Proceedings of the Annual American Society of Neuroradiology (ASNR) Meeting. Long Beach, California 2017.
46. Lao J, Chen Y, Li Z, et al. A deep learning-based radiomics model for prediction of survival in GBM. Sci Rep 2017; 7:10303.

artificial intelligence; deep learning; genomics; glioma; MRI

Copyright © 2020 Wolters Kluwer Health, Inc. All rights reserved.