U-Net_dc: A Novel U-Net-Based Model for Endometrial Cancer Cell Image Segmentation
Published: 2023-06-28
Issue: 7
Volume: 14
Page: 366
ISSN: 2078-2489
Container-title: Information
Language: en
Short-container-title: Information
Author:
Ji, Zhanlin 1,2 (ORCID); Yao, Dashuang 1 (ORCID); Chen, Rui 3; Lyu, Tao 3; Liao, Qinping 3; Zhao, Li 4; Ganchev, Ivan 2,5,6 (ORCID)
Affiliation:
1. Hebei Key Laboratory of Industrial Intelligent Perception, North China University of Science and Technology, Tangshan 063210, China
2. Telecommunications Research Centre (TRC), University of Limerick, V94 T9PX Limerick, Ireland
3. Changgeng Hospital, Institute for Precision Medicine, Tsinghua University, Beijing 100084, China
4. Beijing National Research Center for Information Science and Technology, Institute for Precision Medicine, Tsinghua University, Beijing 100084, China
5. Department of Computer Systems, University of Plovdiv “Paisii Hilendarski”, 4000 Plovdiv, Bulgaria
6. Institute of Mathematics and Informatics, Bulgarian Academy of Sciences, 1040 Sofia, Bulgaria
Abstract
Mutated cells may constitute a source of cancer. As an effective means of quantifying the extent of cancer, cell image segmentation is of particular importance for understanding the mechanism of the disease, assessing the degree of cancer cell lesions, and improving the efficiency of treatment and the efficacy of drugs. However, traditional image segmentation models are not well suited to cancer cell images, because cancer cells are highly dense and vary widely in shape and size. To tackle this problem, this paper proposes a novel U-Net-based image segmentation model, named U-Net_dc, which doubles the original U-Net encoder and decoder and, in addition, uses skip connections between them for better extraction of image features. In addition, the feature maps of the last few U-Net layers are upsampled to the same size and then concatenated to produce the final output, which allows the final feature map to retain many deep-level features. Moreover, dense atrous convolution (DAC) and residual multi-kernel pooling (RMP) modules are introduced between the encoder and decoder, which help the model obtain receptive fields of different sizes, extract richer feature representations, detect objects of different sizes, and capture more context information. According to the results of experiments conducted on Tsinghua University’s private dataset of endometrial cancer cells and the publicly available Data Science Bowl 2018 (DSB2018) dataset, the proposed U-Net_dc model outperforms all state-of-the-art models included in the performance comparison study, on all evaluation metrics used.
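For readers who want a concrete picture of the two bottleneck modules named in the abstract, the sketch below shows one plausible PyTorch implementation of the DAC and RMP blocks, following their original formulation in CE-Net (from which they are commonly borrowed). This is not the authors’ released code: the branch layout, dilation rates, pooling sizes, and channel counts here are illustrative assumptions.

```python
# Illustrative sketch (assumed, not the paper's code): DAC and RMP blocks
# placed between the encoder and decoder of a U-Net-style segmentation model.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DACBlock(nn.Module):
    """Dense atrous convolution: four parallel branches with growing
    receptive fields (dilations 1/3/5), summed with the input (residual)."""
    def __init__(self, ch):
        super().__init__()
        self.b1 = nn.Conv2d(ch, ch, 3, padding=1, dilation=1)
        self.b2 = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=3, dilation=3),
            nn.Conv2d(ch, ch, 1))
        self.b3 = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1, dilation=1),
            nn.Conv2d(ch, ch, 3, padding=3, dilation=3),
            nn.Conv2d(ch, ch, 1))
        self.b4 = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1, dilation=1),
            nn.Conv2d(ch, ch, 3, padding=3, dilation=3),
            nn.Conv2d(ch, ch, 3, padding=5, dilation=5),
            nn.Conv2d(ch, ch, 1))

    def forward(self, x):
        return (x + F.relu(self.b1(x)) + F.relu(self.b2(x))
                  + F.relu(self.b3(x)) + F.relu(self.b4(x)))


class RMPBlock(nn.Module):
    """Residual multi-kernel pooling: max-pool at several kernel sizes,
    reduce each result to one channel, upsample back to the input size,
    and concatenate everything with the original feature map."""
    def __init__(self, ch, pool_sizes=(2, 3, 5, 6)):
        super().__init__()
        self.pools = nn.ModuleList(nn.MaxPool2d(k, stride=k) for k in pool_sizes)
        self.convs = nn.ModuleList(nn.Conv2d(ch, 1, 1) for _ in pool_sizes)
        self.out_channels = ch + len(pool_sizes)

    def forward(self, x):
        h, w = x.shape[2:]
        feats = [x]
        for pool, conv in zip(self.pools, self.convs):
            y = conv(pool(x))
            feats.append(F.interpolate(y, size=(h, w), mode="bilinear",
                                       align_corners=False))
        return torch.cat(feats, dim=1)


if __name__ == "__main__":
    # Hypothetical bottleneck feature map (batch 1, 512 channels, 16x16).
    x = torch.randn(1, 512, 16, 16)
    x = DACBlock(512)(x)
    x = RMPBlock(512)(x)   # -> 516 channels after concatenation
    print(x.shape)         # torch.Size([1, 516, 16, 16])
```

Under these assumptions, the bottleneck feature map gains one extra channel per pooling scale, so the first decoder block would need to accept the enlarged channel count (516 instead of 512 in the example above).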
Funder
National Key Research and Development Program of China; Tsinghua Precision Medicine Foundation; Bulgarian National Science Fund; Telecommunications Research Centre (TRC) of University of Limerick, Ireland
Subject
Information Systems
Cited by: 2 articles