Abstract
This study presents a robust framework for brain tumor classification, beginning with careful data curation from 233 patients. The dataset comprises T1-weighted contrast-enhanced images spanning meningioma, glioma, and pituitary tumor types. Organization, pre-processing, and augmentation techniques are applied to optimize model training. The proposed self-adaptive model leverages Contrast Limited Adaptive Histogram Equalization (CLAHE) and Self-Adaptive Spatial Attention. CLAHE enhances grayscale images by tailoring contrast to the local characteristics of each region. The Self-Adaptive Spatial Attention, implemented as an attention layer, dynamically assigns weights to spatial locations, enhancing sensitivity to critical brain regions. The architecture draws on transfer-learning models, including DenseNet169, DenseNet201, ResNet152, and InceptionResNetV2; DenseNet169 serves as a feature extractor, capturing hierarchical features through pre-trained weights. Adaptability is further supported by batch normalization, dropout, layer normalization, and an adaptive learning-rate strategy that mitigates overfitting and adjusts the learning rate during training. The Adam optimizer and a softmax output layer support optimization and multi-class classification. By combining transfer learning with adaptive mechanisms, the proposed model emerges as a powerful tool for brain tumor detection and classification in medical imaging, and its self-adaptive attention mechanisms position it as a promising advance in computer-aided diagnosis for neuroimaging.
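The abstract does not give the paper's exact CLAHE parameters, but the core idea can be sketched in NumPy: split the image into tiles, clip each tile's histogram so contrast amplification is bounded, redistribute the clipped counts, and equalize per tile. The tile size and clip limit below are illustrative assumptions, and the per-tile interpolation of full CLAHE is omitted for brevity.

```python
import numpy as np

def clahe_tile(tile, clip_limit=40):
    # Clipped histogram equalization for a single tile.
    hist, _ = np.histogram(tile, bins=256, range=(0, 256))
    excess = np.maximum(hist - clip_limit, 0).sum()
    hist = np.minimum(hist, clip_limit) + excess // 256  # redistribute clipped counts
    cdf = hist.cumsum().astype(np.float64)
    cdf = (cdf - cdf.min()) / max(cdf.max() - cdf.min(), 1) * 255
    return cdf[tile].astype(np.uint8)

def simple_clahe(img, tile=64, clip_limit=40):
    # Simplified CLAHE: equalize each tile independently
    # (full CLAHE also bilinearly interpolates between tile mappings).
    out = np.empty_like(img)
    h, w = img.shape
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            out[y:y + tile, x:x + tile] = clahe_tile(
                img[y:y + tile, x:x + tile], clip_limit)
    return out
```

In practice a library implementation such as OpenCV's `cv2.createCLAHE` would be used; the sketch only shows why the clip limit keeps noise in near-uniform regions from being over-amplified.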
Leveraging DenseNet201 with a self-adaptive mechanism, the model surpasses previous methods, achieving 94.85% accuracy, 95.16% precision, and 94.60% recall, demonstrating improved accuracy and generalization in medical image analysis.
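The abstract does not specify the exact form of the Self-Adaptive Spatial Attention layer; a common pattern it describes (a per-location score squashed to (0, 1) and used to reweight the feature map) can be sketched in NumPy. The 1x1-projection weights `w` here are a hypothetical stand-in for the layer's learned parameters.

```python
import numpy as np

def spatial_attention(feature_map, w):
    # feature_map: (H, W, C) activations; w: (C,) hypothetical learned
    # weights of a 1x1 projection producing one score per spatial location.
    scores = feature_map @ w                      # (H, W) location scores
    attn = 1.0 / (1.0 + np.exp(-scores))          # sigmoid -> weights in (0, 1)
    return feature_map * attn[..., None]          # reweight every location
```

In a trained network `w` (and typically a small convolutional sub-network producing the scores) would be learned jointly with the backbone, letting the model emphasize tumor-relevant regions.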
Publisher
Research Square Platform LLC