Predicting glioblastoma molecular subtypes and prognosis with a multimodal model integrating convolutional neural network, radiomics, and semantics

Authors:

Zhong Sheng (1,2,3); Ren Jia-Xin (4); Yu Ze-Peng (1); Peng Yi-Da (5); Yu Cheng-Wei (1); Deng Davy (2); Xie YangYiran (6); He Zhen-Qiang (1); Duan Hao (1); Wu Bo (7); Li Hui (8); Yang Wen-Zhuo (1); Bai Yang (9); Sai Ke (1); Chen Yin-Sheng (1); Guo Cheng-Cheng (1); Li De-Pei (1); Cheng Ye (10); Zhang Xiang-Heng (1); Mou Yong-Gao (1)

Affiliation:

1. Department of Neurosurgery and Neuro-Oncology, State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, China;

2. Department of Cancer Biology, Dana-Farber Cancer Institute, Boston, Massachusetts;

3. Department of Bioinformatics, Harvard Medical School, Boston, Massachusetts;

4. Department of Neurology, Stroke Center, The First Hospital of Jilin University, Changchun, China;

5. College of Computer Science and Technology, Jilin University, Changchun, China;

6. Vanderbilt University School of Medicine, Nashville, Tennessee;

7. Department of Orthopaedics, The First Hospital of Jilin University, Changchun, China;

8. Department of Neurology, The First Hospital of Jilin University, Changchun, China;

9. Department of Neurosurgery, The First Hospital of Jilin University, Changchun, China; and

10. Department of Neurosurgery, Xuanwu Hospital, Capital Medical University, Beijing, China

Abstract

OBJECTIVE The aim of this study was to build a convolutional neural network (CNN)–based prediction model of glioblastoma (GBM) molecular subtype diagnosis and prognosis with multimodal features.

METHODS In total, 222 GBM patients were included in the training set from Sun Yat-sen University Cancer Center (SYSUCC) and 107 GBM patients were included in the validation set from SYSUCC, Xuanwu Hospital Capital Medical University, and the First Hospital of Jilin University. The multimodal model was trained with MR images (pre- and postcontrast T1-weighted images and T2-weighted images), the corresponding MRI impression, and clinical patient information. First, the original images were segmented using the Multimodal Brain Tumor Image Segmentation Benchmark toolkit. Convolutional features were extracted using a 3D residual deep neural network (ResNet50) and convolutional 3D (C3D). Radiomic features were extracted using pyradiomics. Report texts were converted to word embeddings using word2vec. These three types of features were then integrated to train neural networks. Accuracy, precision, recall, and F1-score were used to evaluate model performance.

RESULTS The C3D-based model yielded the highest accuracy of 91.11% in the prediction of IDH1 mutation status. Importantly, the addition of semantics improved precision by 11.21% and recall in MGMT promoter methylation status prediction by 14.28%. The areas under the receiver operating characteristic curves of the C3D-based model in the IDH1, ATRX, MGMT, and 1-year prognosis groups were 0.976, 0.953, 0.955, and 0.976, respectively. In external validation, the C3D-based model showed significant improvement in accuracy in the IDH1, ATRX, MGMT, and 1-year prognosis groups, which were 88.30%, 76.67%, 85.71%, and 85.71%, respectively (compared with 3D ResNet50: 83.51%, 66.67%, 82.14%, and 70.79%, respectively).
CONCLUSIONS The authors propose a novel multimodal model integrating C3D, radiomics, and semantics, which performed well in predicting the IDH1, ATRX, and MGMT molecular subtypes and the 1-year prognosis of GBM.
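The fusion step described in METHODS, combining CNN activations, radiomic features, and text embeddings before a classifier head, can be sketched in a minimal form. The feature dimensions, the z-score normalization, and the single logistic output unit below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-patient features standing in for the three modalities:
# conv_feat - pooled activations from a 3D CNN such as C3D (assumed 512-dim)
# rad_feat  - handcrafted radiomic features from pyradiomics (assumed 107-dim)
# text_feat - averaged word2vec embedding of the MRI impression (assumed 100-dim)
conv_feat = rng.normal(size=(4, 512))
rad_feat = rng.normal(size=(4, 107))
text_feat = rng.normal(size=(4, 100))

def fuse(conv, rad, text):
    """Early fusion: z-score each modality, then concatenate along features."""
    def zscore(x):
        return (x - x.mean(axis=0)) / (x.std(axis=0) + 1e-8)
    return np.concatenate([zscore(conv), zscore(rad), zscore(text)], axis=1)

def predict(fused, w, b):
    """A single logistic unit as a stand-in for the fusion network's
    binary output head (e.g., IDH1 mutant vs. wild type)."""
    return 1.0 / (1.0 + np.exp(-(fused @ w + b)))

fused = fuse(conv_feat, rad_feat, text_feat)
w = rng.normal(size=fused.shape[1]) * 0.01  # untrained weights, for shape only
probs = predict(fused, w, 0.0)
print(fused.shape)  # (4, 719): 512 + 107 + 100 fused features per patient
print(probs.shape)  # (4,): one probability per patient
```

In practice the paper trains full neural networks on the fused representation; this sketch only illustrates why the three feature streams can be concatenated once each is reduced to a fixed-length vector per patient.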

Publisher

Journal of Neurosurgery Publishing Group (JNSPG)

Subject

Genetics, Animal Science and Zoology
