Semi-Supervised Medical Image Segmentation Based on Deep Consistent Collaborative Learning
Published: 2024-05-14
Issue: 5
Volume: 10
Page: 118
ISSN: 2313-433X
Container-title: Journal of Imaging
Language: en
Short-container-title: J. Imaging
Authors:
Zhao Xin 1, Wang Wenqi 1
Affiliation:
1. College of Information Engineering, Dalian University, Dalian 116622, China
Abstract
In medical image analysis, acquiring accurately labeled data is prohibitively expensive. To address this label scarcity, semi-supervised learning methods exploit unlabeled data alongside a limited set of labeled data. This paper presents a novel semi-supervised medical segmentation framework, DCCLNet (deep consistency collaborative learning UNet), grounded in deep consistent co-learning. The framework integrates consistency learning from feature and input perturbations with collaborative training between a CNN (convolutional neural network) and a ViT (vision transformer), to capitalize on the complementary learning advantages of these two paradigms. Feature perturbation applies auxiliary decoders with varied feature disturbances to the main CNN backbone, enhancing its robustness through consistency constraints between the auxiliary and main decoders. Input perturbation employs an MT (mean teacher) architecture in which the main network serves as the student model, guided by a teacher model whose inputs are perturbed. Collaborative training further improves the accuracy of the main networks by encouraging mutual learning between the CNN and the ViT. Experiments on the publicly available ACDC (automated cardiac diagnosis challenge) and Prostate datasets yielded Dice coefficients of 0.890 and 0.812, respectively. In addition, comprehensive ablation studies were performed to demonstrate the effectiveness of each methodological contribution.
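The training scheme described in the abstract combines three unsupervised signals with a supervised loss: feature-perturbation consistency between the CNN's main and auxiliary decoders, input-perturbation (mean-teacher) consistency between the student and an EMA teacher, and mutual pseudo-label supervision between the CNN and ViT branches. The snippet below is a minimal sketch of one such training step, assuming PyTorch; the tiny stand-in modules, noise scales, and loss weights are illustrative placeholders, not the authors' implementation.

```python
# Hedged sketch of one DCCLNet-style training step (assumed PyTorch setup).
# TinyCNNSeg/TinyViTSeg, the noise scales, and the loss weights are placeholders.
import copy
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCNNSeg(nn.Module):
    """Stand-in for the CNN (U-Net) backbone: shared encoder, main and auxiliary decoders."""
    def __init__(self, in_ch=1, n_cls=2):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU())
        self.main_dec = nn.Conv2d(16, n_cls, 1)
        self.aux_dec = nn.Conv2d(16, n_cls, 1)   # auxiliary decoder fed perturbed features

    def forward(self, x, feat_noise=0.0):
        f = self.enc(x)
        main = self.main_dec(f)
        aux = self.aux_dec(f + feat_noise * torch.randn_like(f))  # feature perturbation
        return main, aux

class TinyViTSeg(nn.Module):
    """Stand-in for the ViT branch (a small conv head here, for brevity)."""
    def __init__(self, in_ch=1, n_cls=2):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, n_cls, 1))
    def forward(self, x):
        return self.net(x)

cnn, vit = TinyCNNSeg(), TinyViTSeg()
teacher = copy.deepcopy(cnn)                      # mean-teacher copy of the CNN student
for p in teacher.parameters():
    p.requires_grad_(False)
opt = torch.optim.Adam(list(cnn.parameters()) + list(vit.parameters()), lr=1e-3)

x_l = torch.randn(2, 1, 64, 64); y_l = torch.randint(0, 2, (2, 64, 64))  # labeled batch
x_u = torch.randn(2, 1, 64, 64)                                          # unlabeled batch

# Supervised loss on labeled data (cross-entropy; a Dice term could be added).
main_l, _ = cnn(x_l, feat_noise=0.1)
sup = F.cross_entropy(main_l, y_l) + F.cross_entropy(vit(x_l), y_l)

# Feature-perturbation consistency: auxiliary vs. main decoder on unlabeled data.
main_u, aux_u = cnn(x_u, feat_noise=0.1)
feat_cons = F.mse_loss(torch.softmax(aux_u, 1), torch.softmax(main_u, 1).detach())

# Input-perturbation (mean-teacher) consistency: student vs. EMA teacher.
with torch.no_grad():
    t_main, _ = teacher(x_u + 0.05 * torch.randn_like(x_u))   # perturbed input
mt_cons = F.mse_loss(torch.softmax(main_u, 1), torch.softmax(t_main, 1))

# Collaborative training: CNN and ViT supervise each other with pseudo-labels.
vit_u = vit(x_u)
co = (F.cross_entropy(vit_u, main_u.argmax(1).detach())
      + F.cross_entropy(main_u, vit_u.argmax(1).detach()))

loss = sup + 0.1 * (feat_cons + mt_cons + co)     # weights are illustrative
opt.zero_grad(); loss.backward(); opt.step()

# EMA update of the teacher from the CNN student.
with torch.no_grad():
    for tp, sp in zip(teacher.parameters(), cnn.parameters()):
        tp.mul_(0.99).add_(sp, alpha=0.01)
```

In this sketch the teacher is an exponential moving average of the CNN student, so it receives no gradients and is updated only after each optimizer step; the consistency weight would typically be ramped up over training rather than fixed.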
Funder
The National Natural Science Foundation of China