Authors:
Wang Tao, Lan Junlin, Han Zixin, Hu Ziwei, Huang Yuxiu, Deng Yanglin, Zhang Hejun, Wang Jianchao, Chen Musheng, Jiang Haiyan, Lee Ren-Guey, Gao Qinquan, Du Ming, Tong Tong, Chen Gang
Abstract
The application of deep learning in the medical field has achieved major breakthroughs in recent years. Based on the convolutional neural network (CNN), the U-Net framework has become the benchmark for medical image segmentation tasks. However, this framework cannot fully learn global information and long-range semantic information. The transformer structure has been demonstrated to capture global information better than the U-Net, but its ability to learn local information is inferior to that of the CNN. Therefore, we propose a novel network, referred to as the O-Net, which combines the advantages of the CNN and the transformer to fully exploit both global and local information for improving medical image segmentation and classification. In the encoder part of the proposed O-Net framework, we combine the CNN and the Swin Transformer to acquire both global and local contextual features. In the decoder part, the outputs of the Swin Transformer and the CNN blocks are fused to obtain the final results. We evaluated the proposed network on the Synapse multi-organ CT dataset and the ISIC 2017 challenge dataset for the segmentation task. The classification network is trained simultaneously using the encoder weights of the segmentation network. The experimental results show that the proposed O-Net achieves better segmentation performance than state-of-the-art approaches, and that the segmentation results help improve the accuracy of the classification task. The code and models of this study are available at https://github.com/ortonwang/O-Net.
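The dual-branch idea described in the abstract can be sketched as follows. This is a hedged toy illustration, not the authors' implementation (their code is at the repository above): `local_branch`, `global_branch`, and `fuse` are hypothetical stand-ins, where a 3x3 box filter plays the role of a CNN block capturing local context, a global-mean mixer plays the role of a transformer block capturing global context, and the two branch outputs are fused by simple averaging, analogous to combining the Swin Transformer and CNN results in the decoder.

```python
import numpy as np

def local_branch(x):
    """Stand-in for a CNN block: 3x3 box filter capturing local context."""
    pad = np.pad(x, ((1, 1), (1, 1), (0, 0)), mode="edge")
    out = np.zeros_like(x)
    for dy in range(3):
        for dx in range(3):
            out += pad[dy:dy + x.shape[0], dx:dx + x.shape[1]]
    return out / 9.0

def global_branch(x):
    """Stand-in for a transformer block: mixes each position with the
    global mean feature, i.e. every pixel sees the whole image."""
    return 0.5 * x + 0.5 * x.mean(axis=(0, 1), keepdims=True)

def fuse(a, b):
    """Decoder-style fusion of the two branch outputs (simple average)."""
    return (a + b) / 2.0

# Toy (H, W, C) feature map; in the real network these would be
# multi-scale feature maps produced by the encoder.
x = np.random.rand(8, 8, 4).astype(np.float32)
y = fuse(local_branch(x), global_branch(x))
```

The fusion here is a plain average purely for illustration; the actual network learns how to combine the two streams.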
Cited by: 22 articles.