Robust Automated Tumour Segmentation Network Using 3D Direction-Wise Convolution and Transformer
Published: 2024-05-09
ISSN: 2948-2933
Container-title: Journal of Imaging Informatics in Medicine
Language: en
Short-container-title: J. Imaging Inform. Med.
Author:
Chu Ziping, Singh Sonit, Sowmya Arcot
Abstract
Semantic segmentation of tumours plays a crucial role in medical image analysis and has a significant impact on cancer diagnosis and treatment planning. UNet and its variants have achieved state-of-the-art results on various 2D and 3D medical image segmentation tasks across different imaging modalities. Recently, researchers have tried to merge the multi-head self-attention mechanism introduced by the Transformer into U-shaped network structures to enhance segmentation performance. However, both components suffer from limitations that cause networks to under-perform on voxel-level classification tasks: the Transformer cannot encode positional information and translation equivariance, while the Convolutional Neural Network (CNN) lacks global features and dynamic attention. In this work, a new architecture named TCTNet (Tumour Segmentation with 3D Direction-Wise Convolution and Transformer) is introduced, comprising an encoder that utilises a hybrid Transformer-CNN structure and a decoder that incorporates 3D Direction-Wise Convolution. Experimental results show that the proposed hybrid Transformer-CNN network obtains better performance than other 3D segmentation networks on the Brain Tumour Segmentation 2021 (BraTS21) dataset. Two further tumour datasets from the Medical Segmentation Decathlon are used to test the generalisation ability of the proposed architecture. In addition, an ablation study verifies the effectiveness of the designed decoder for tumour segmentation. The proposed method maintains competitive segmentation performance while reducing computational effort by 10% in terms of floating-point operations.
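The abstract does not reproduce the decoder's exact design, but the computational advantage of direction-wise 3D convolution can be illustrated with a simple FLOP count: a dense k×k×k kernel is factorised into three 1D kernels, one per spatial axis. The kernel size (3), channel count (64) and feature-map size (32³) below are illustrative assumptions, not values from the paper.

```python
def conv3d_macs(cin, cout, d, h, w, kd, kh, kw):
    """Multiply-accumulate count for a 3D convolution at stride 1 with
    'same' padding: one MAC per input channel, kernel tap and output voxel."""
    return cin * cout * d * h * w * kd * kh * kw

# Dense 3x3x3 convolution on a 64-channel, 32^3 feature map
dense = conv3d_macs(64, 64, 32, 32, 32, 3, 3, 3)

# Direction-wise factorisation: 3x1x1, 1x3x1 and 1x1x3 kernels applied in turn
directionwise = sum(conv3d_macs(64, 64, 32, 32, 32, *k)
                    for k in [(3, 1, 1), (1, 3, 1), (1, 1, 3)])

print(directionwise / dense)  # 9 taps instead of 27, i.e. a 1/3 ratio
```

The dense kernel has 27 taps while the three 1D kernels have 9 in total, so this toy layer needs one third of the MACs; in a full network, where only some layers are factorised, a more modest overall saving (such as the 10% reported above) is plausible.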
Funder
University of New South Wales
Publisher
Springer Science and Business Media LLC