A 3D Grouped Convolutional Network Fused with Conditional Random Field and Its Application in Image Multi-target Fine Segmentation
Published: 2022-02-21
Issue: 1
Volume: 15
ISSN: 1875-6883
Container-title: International Journal of Computational Intelligence Systems
Short-container-title: Int J Comput Intell Syst
Language: en
Author: Yin Jian, Zhou Zhibo, Xu Shaohua, Yang Ruiping, Liu Kun
Abstract
To exploit the correlation between adjacent slices in multi-target segmentation of 3D image volumes and to optimize the segmentation results, a 3D grouped fully convolutional network fused with conditional random fields (3D-GFCN) is proposed. The model uses a fully convolutional network (FCN) as the segmentation backbone and a fully connected conditional random field (FCCRF) as the post-processing tool. It extends 2D convolutions to 3D operations and uses shortcut connections to fuse features at different levels and scales, realizing fine segmentation of 3D image slices. 3D-GFCN uses 3D convolution kernels to correlate information across adjacent slices, exploits the context-correlation and probabilistic-inference mechanism of the FCCRF to refine the segmentation results, and uses grouped convolution to reduce the number of model parameters. The Dice loss, which ignores the contribution of background pixels, is used as the training objective to reduce the effect of the class imbalance between background and target pixels. The model automatically learns to focus on target structures of different shapes and sizes and highlights the salient features useful for the task at hand. In this way it alleviates shortcomings of existing image segmentation algorithms, such as indistinct morphological features of the target, weak spatial correlation, and discontinuous segmentation results, and improves both multi-target segmentation accuracy and learning efficiency. Abdominal abnormal-tissue detection and multi-target segmentation on 3D computed tomography (CT) images serve as the verification experiments. On a small-scale, unbalanced data set, the average Dice coefficient is 88.8%, the class pixel accuracy is 95.3%, and the intersection over union is 87.8%. Compared with other methods, the evaluation metrics and segmentation accuracy are significantly improved, showing that the proposed method is well suited to typical multi-target segmentation problems such as boundary overlap, offset deformation, and low contrast.
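The core building block named in the abstract, a 3D grouped convolution combined with a shortcut connection, can be illustrated with the minimal PyTorch sketch below. The channel count and number of groups are illustrative assumptions, not values reported in the paper.

```python
# Minimal sketch (not the authors' code) of a 3D grouped-convolution block
# with a shortcut (residual) connection, assuming PyTorch.
import torch
import torch.nn as nn

class Grouped3DBlock(nn.Module):
    def __init__(self, channels=64, groups=4):
        super().__init__()
        # Grouped 3D convolution: each output channel only sees channels/groups
        # input channels, so the weight count shrinks by roughly `groups`.
        self.conv = nn.Conv3d(channels, channels, kernel_size=3,
                              padding=1, groups=groups, bias=False)
        self.bn = nn.BatchNorm3d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):              # x: (N, C, D, H, W)
        # Shortcut connection fuses the block input with its transformed output.
        return self.act(x + self.bn(self.conv(x)))
```

With `groups=4`, a 3x3x3 convolution on C channels needs C x (C/4) x 27 weights instead of C x C x 27, which is the parameter reduction the abstract attributes to grouped convolution.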
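FCCRF post-processing is commonly formulated as the fully connected CRF of Krähenbühl and Koltun; the standard energy is sketched below as background, and it may differ in detail from the formulation used in the paper.

```latex
E(\mathbf{x}) = \sum_i \psi_u(x_i) + \sum_{i<j} \psi_p(x_i, x_j), \qquad
\psi_p(x_i, x_j) = \mu(x_i, x_j)\left[
  w^{(1)} \exp\!\left(-\frac{\lVert p_i - p_j \rVert^2}{2\theta_\alpha^2}
                      -\frac{\lVert I_i - I_j \rVert^2}{2\theta_\beta^2}\right)
  + w^{(2)} \exp\!\left(-\frac{\lVert p_i - p_j \rVert^2}{2\theta_\gamma^2}\right)
\right]
```

Here the unary potential ψ_u is taken from the network's per-voxel class probabilities, p_i and I_i are the position and intensity of voxel i, and μ is a label-compatibility function; minimizing E(x) enforces the context correlation that refines the FCN output.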
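The Dice loss described in the abstract excludes the background channel so that the many background voxels do not swamp the target classes. A minimal sketch, assuming a PyTorch-style one-hot target tensor (again, not the authors' code), is shown below.

```python
# Illustrative multi-class soft Dice loss with the background channel dropped.
import torch

def soft_dice_loss(logits, target_onehot, eps=1e-6):
    """logits: (N, C, D, H, W); target_onehot: one-hot labels of the same shape."""
    probs = torch.softmax(logits, dim=1)
    # Drop channel 0 (background) so background pixels do not dominate the loss.
    probs, target = probs[:, 1:], target_onehot[:, 1:]
    dims = (0, 2, 3, 4)                      # sum over batch and spatial axes
    intersection = (probs * target).sum(dims)
    cardinality = probs.sum(dims) + target.sum(dims)
    dice_per_class = (2.0 * intersection + eps) / (cardinality + eps)
    return 1.0 - dice_per_class.mean()       # average over foreground classes
```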
Funder
Shandong University of Science and Technology Research Fund
Publisher
Springer Science and Business Media LLC
Subject
Computational Mathematics, General Computer Science
Cited by
2 articles.