Abstract
BACKGROUND
Deep learning through convolutional neural networks (CNNs) has recently emerged as a top-performing machine-learning approach across various image classification tasks.
OBJECTIVES
We propose a CNN approach that integrates multimodal MRI data, tumor volumetrics, and Karnofsky performance score (KPS) to predict overall survival (OS) in patients with IDH wild-type (IDHwt) glioma.
METHODS
High-grade and low-grade IDHwt glioma patients were identified from The Cancer Genome Atlas. Corresponding multimodal MRI (T2, FLAIR, T1 pre- and post-contrast) scans were obtained from The Cancer Imaging Archive. A fully automated algorithm was used to segment tumor margins and to determine whole-tumor volumes and lobe locations. Patients were stratified into three groups based on OS: poor (1-6 months), average (6-24 months), and high (>24 months). A CNN was used to integrate multimodal MRI, tumor volume and location, and KPS to predict patient OS. The 3D CNN is based on a generative adversarial network for semi-supervised learning utilizing feature matching. Non-imaging data were integrated into the classifier by concatenation with imaging features in the penultimate layer.
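The abstract specifies the fusion strategy but not the implementation. The following is a minimal sketch, assuming a PyTorch-style 3D CNN with illustrative layer sizes, an input of four stacked MRI sequences, and three clinical covariates (tumor volume, lobe location, KPS); the semi-supervised GAN training with feature matching is omitted here for brevity.

```python
# Minimal sketch (not the authors' released code) of the late-fusion idea described
# above: a 3D CNN encodes the multimodal MRI volumes, and non-imaging covariates
# are concatenated with the imaging features in the penultimate layer before the
# 3-class survival prediction. Layer sizes, input resolution, and the number of
# clinical covariates are illustrative assumptions.
import torch
import torch.nn as nn

class SurvivalCNN(nn.Module):
    def __init__(self, n_mri_channels=4, n_clinical=3, n_classes=3):
        super().__init__()
        # 3D convolutional encoder over the stacked MRI sequences
        # (T2, FLAIR, T1 pre- and post-contrast -> 4 input channels).
        self.encoder = nn.Sequential(
            nn.Conv3d(n_mri_channels, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),   # global pooling -> one feature vector per patient
        )
        # Penultimate layer: imaging features concatenated with clinical covariates.
        self.classifier = nn.Sequential(
            nn.Linear(32 + n_clinical, 64),
            nn.ReLU(),
            nn.Linear(64, n_classes),  # poor / average / high survival
        )

    def forward(self, mri, clinical):
        feats = self.encoder(mri).flatten(1)          # (batch, 32)
        fused = torch.cat([feats, clinical], dim=1)   # late fusion in penultimate layer
        return self.classifier(fused)

# Example shapes: a batch of 2 patients with 64^3 multimodal volumes and
# 3 clinical covariates (e.g., normalized tumor volume, lobe code, KPS).
logits = SurvivalCNN()(torch.randn(2, 4, 64, 64, 64), torch.randn(2, 3))
print(logits.shape)  # torch.Size([2, 3])
```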
RESULTS
A total of 110 patients were analyzed (26 poor survival, 61 average survival, 23 high survival). Single-factor ANOVA did not detect a significant difference in OS based on tumor volume, lobe location, or KPS individually. However, the integrated multimodal CNN accurately predicted the survival cohort in 82% of patients under five-fold cross-validation. The features most highly correlated with survival were identified.
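The evaluation protocol can be sketched as below, using a stand-in scikit-learn classifier in place of the 3D CNN; the per-patient feature matrix, labels, and random data here are placeholders, not the study data.

```python
# Minimal sketch of a five-fold cross-validated accuracy estimate, assuming
# precomputed per-patient features X and three-class survival labels y
# (0 = poor, 1 = average, 2 = high). A logistic-regression stand-in replaces
# the integrated multimodal CNN for illustration only.
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X = rng.normal(size=(110, 8))        # placeholder patient features
y = rng.integers(0, 3, size=110)     # placeholder survival-group labels

accuracies = []
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for train_idx, test_idx in cv.split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    accuracies.append(accuracy_score(y[test_idx], clf.predict(X[test_idx])))

print(f"mean five-fold accuracy: {np.mean(accuracies):.2f}")
```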
CONCLUSION
A deep learning algorithm integrating imaging and clinical data can predict OS in IDHwt glioma with 82% accuracy. Future work will validate this methodology prospectively.