Foundation models in gastrointestinal endoscopic AI: Impact of architecture, pre-training approach and data efficiency
Published: 2024-12
Volume: 98
Page: 103298
ISSN: 1361-8415
Container-title: Medical Image Analysis
Language: en
Short-container-title: Medical Image Analysis
Authors: Boers, Tim G.W.; Fockens, Kiki N.; van der Putten, Joost A.; Jaspers, Tim J.M.; Kusters, Carolus H.J.; Jukema, Jelmer B.; Jong, Martijn R.; Struyvenberg, Maarten R.; de Groof, Jeroen; Bergman, Jacques J.; de With, Peter H.N.; van der Sommen, Fons
Funders: Olympus Corporation; Dutch Research Council
References: 49 articles.