Shifting to machine supervision: annotation-efficient semi and self-supervised learning for automatic medical image segmentation and classification
Published: 2024-05-11
Container title: Scientific Reports (Sci Rep)
Volume: 14, Issue: 1
ISSN: 2045-2322
Language: en
Authors: Pranav Singh, Raviteja Chukkapalli, Shravan Chaudhari, Luoyao Chen, Mei Chen, Jinqian Pan, Craig Smuda, Jacopo Cirrone
Abstract
Advancements in clinical treatment are increasingly constrained by the limitations of supervised learning techniques, which depend heavily on large volumes of annotated data. The annotation process is not only costly but also demands substantial time from clinical specialists. Addressing this issue, we introduce the S4MI (Self-Supervision and Semi-Supervision for Medical Imaging) pipeline, a novel approach that leverages advancements in self-supervised and semi-supervised learning. These techniques engage in auxiliary tasks that do not require labeling, thus simplifying the scaling of machine supervision compared to fully-supervised methods. Our study benchmarks these techniques on three distinct medical imaging datasets to evaluate their effectiveness in classification and segmentation tasks. Notably, we observed that self-supervised learning significantly surpassed the performance of supervised methods in the classification of all evaluated datasets. Remarkably, the semi-supervised approach demonstrated superior outcomes in segmentation, outperforming fully-supervised methods while using 50% fewer labels across all datasets. In line with our commitment to contributing to the scientific community, we have made the S4MI code openly accessible, allowing for broader application and further development of these methods. The code can be accessed at https://github.com/pranavsinghps1/S4MI.
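The "auxiliary tasks that do not require labeling" mentioned in the abstract can be made concrete with a contrastive objective such as SimCLR's NT-Xent loss, in which two augmented views of the same unlabeled image form a positive pair and all other images in the batch act as negatives. The following is a minimal NumPy sketch under generic assumptions, not the authors' S4MI implementation (the actual code is in the linked repository); the function name and toy embeddings are illustrative only.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) loss.

    z1[i] and z2[i] are embeddings of two augmented views of the same
    unlabeled image; every other row in the combined batch is a negative.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)              # (2n, d) combined batch
    z = z / np.linalg.norm(z, axis=1, keepdims=True)  # L2-normalize embeddings
    sim = z @ z.T / temperature                       # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                    # exclude self-similarity
    # the positive for row i is row i+n (and vice versa)
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # log-softmax over each row, then pick out the positive-pair entry
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), targets].mean()

# Toy usage: random "embeddings" of 4 images, second view lightly perturbed
rng = np.random.default_rng(0)
z1 = rng.normal(size=(4, 16))
loss = nt_xent_loss(z1, z1 + 0.01 * rng.normal(size=(4, 16)))
```

Because the objective needs only two views of each image, it scales to unlabeled data; mispairing the views (so positives are no longer true matches) increases the loss, which is what drives the encoder to learn augmentation-invariant features.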
Publisher: Springer Science and Business Media LLC