Combining the Transformer and Convolution for Effective Brain Tumor Classification Using MRI Images

Authors:

Mohammed Aloraini 1, Asma Khan 2, Suliman Aladhadh 3, Shabana Habib 3, Mohammed F. Alsharekh 1, Muhammad Islam 4

Affiliation:

1. Department of Electrical Engineering, College of Engineering, Qassim University, Unaizah 56452, Saudi Arabia

2. Department of Computer Science, Islamia College Peshawar, Peshawar 25120, Pakistan

3. Department of Information Technology, College of Computer, Qassim University, Buraydah 51452, Saudi Arabia

4. Department of Electrical Engineering, College of Engineering and Information Technology, Onaizah Colleges, Onaizah 56447, Saudi Arabia

Abstract

Worldwide, brain tumors (BT) are among the leading causes of cancer-related death, and patient survival depends on early and accurate detection. Computer-aided diagnosis (CAD) plays a significant role in early BT detection by giving medical experts a second opinion during image examination. Researchers have proposed numerous methods based on traditional machine learning (TML) and deep learning (DL). TML requires hand-crafted feature engineering, a time-consuming process in which selecting an optimal feature extractor demands domain experts with sufficient knowledge of feature selection. DL methods outperform TML owing to their end-to-end, automatic, high-level, and robust feature extraction. In BT classification, deep learning methods are well suited to capturing local features through convolution operations, but their ability to extract global features and preserve long-range dependencies is relatively weak. The self-attention mechanism of the Vision Transformer (ViT) can model long-range dependencies, which is very important for precise BT classification. We therefore employ a hybrid transformer-enhanced convolutional neural network (TECNN) for BT classification, in which the CNN extracts local features and the transformer uses an attention mechanism to extract global features. Experiments are performed on two public datasets, BraTS 2018 and Figshare. On these datasets, our model achieves average accuracies of 96.75% and 99.10%, respectively, outperforming several state-of-the-art methods by margins of 3.06% and 1.06% in accuracy.
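Illustrative sketch (not taken from the paper): the abstract describes a hybrid design in which a CNN captures local features and a transformer's self-attention models global, long-range dependencies before classification. The PyTorch code below sketches that general pattern only; the layer sizes, encoder depth, token pooling, and three-class output are assumptions for illustration, since the abstract does not specify the TECNN architecture details.

# Minimal sketch of a hybrid CNN + Transformer classifier in PyTorch.
# All hyperparameters below (channel widths, depth, num_classes=3) are
# illustrative assumptions, not the paper's actual TECNN configuration.
import torch
import torch.nn as nn

class HybridCNNTransformer(nn.Module):
    def __init__(self, num_classes=3, embed_dim=256, depth=2, heads=8):
        super().__init__()
        # CNN stem: extracts local texture and edge features from the MRI slice.
        self.cnn = nn.Sequential(
            nn.Conv2d(1, 64, 3, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.BatchNorm2d(128), nn.ReLU(),
            nn.Conv2d(128, embed_dim, 3, stride=2, padding=1), nn.ReLU(),
        )
        # Transformer encoder: self-attention over the CNN feature-map tokens
        # models long-range dependencies across the whole image.
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        self.cls_head = nn.Linear(embed_dim, num_classes)

    def forward(self, x):                          # x: (B, 1, H, W) MRI slice
        feats = self.cnn(x)                        # (B, C, H', W') local features
        tokens = feats.flatten(2).transpose(1, 2)  # (B, H'*W', C) token sequence
        tokens = self.encoder(tokens)              # global context via attention
        return self.cls_head(tokens.mean(dim=1))   # pool tokens, then classify

if __name__ == "__main__":
    model = HybridCNNTransformer(num_classes=3)
    logits = model(torch.randn(2, 1, 224, 224))
    print(logits.shape)  # torch.Size([2, 3])

The three-class head is chosen to match the Figshare brain-tumor dataset's categories (glioma, meningioma, pituitary); in practice the head size and all other settings would follow the study's actual configuration.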

Publisher

MDPI AG

Subject

Fluid Flow and Transfer Processes, Computer Science Applications, Process Chemistry and Technology, General Engineering, Instrumentation, General Materials Science

Cited by 7 articles.

1. DA-ResBiGRU: brain tumor classification using dual-attention residual bidirectional gated recurrent unit using MRI images;Biomedical Signal Processing and Control;2024-02

2. NeuroNet19: an explainable deep neural network model for the classification of brain tumors using magnetic resonance imaging data;Scientific Reports;2024-01-17

3. Brain tumor segmentation and survival time prediction using graph momentum fully convolutional network with modified Elman spike neural network;International Journal of Imaging Systems and Technology;2024-01

4. Improving Non-Invasive Brain Tumor Categorization using Transformers on MRI Data;2023 International Conference on Digital Image Computing: Techniques and Applications (DICTA);2023-11-28

5. Brain Tumor Segmentation using Convolutional Neural Networks based Visual Geometry Group 19;2023 International Conference on Ambient Intelligence, Knowledge Informatics and Industrial Electronics (AIKIIE);2023-11-02
