Authors:
Jianbo Jiao, Mohammad Alsharid, Lior Drukker, Aris T. Papageorghiou, Andrew Zisserman, J. Alison Noble
Abstract
Auditory and visual signals are two primary perception modalities that usually occur together and correlate with each other, not only in natural environments but also in clinical settings. However, audio-visual modelling in the clinical case can be more challenging, owing to the different sources of the audio/video signals and the noise (both signal-level and semantic-level) in the auditory signals, which are usually speech audio. In this study, we consider audio-visual modelling in a clinical setting, providing a solution to learn medical representations that benefit various clinical tasks without relying on dense supervisory annotations from human experts for model training. A simple yet effective multi-modal self-supervised learning framework is presented for this purpose. The proposed approach is able to help find standard anatomical planes, predict where the sonographer's eyes are focused, and localise anatomical regions of interest during ultrasound imaging. Experimental analysis on a large-scale clinical multi-modal ultrasound video dataset shows that the proposed representation learning method provides good transferable anatomical representations that boost the performance of automated downstream clinical tasks, even outperforming fully-supervised solutions. Being able to learn such medical representations in a self-supervised manner will contribute to a better understanding of obstetric imaging, the training of new sonographers, more effective assistive tools for human experts, and an enhanced clinical workflow.
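To make the idea of learning from paired audio and video without expert annotations concrete, the sketch below shows one common form of cross-modal self-supervision: each modality is encoded, projected into a shared embedding space, and trained with a symmetric contrastive (InfoNCE) objective so that embeddings of co-occurring video and speech clips agree. This is a minimal illustrative example only, not the authors' implementation; the encoder architectures, input shapes, embedding dimension, and the specific loss variant (SmallEncoder, cross_modal_infonce) are assumptions made for demonstration.

import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallEncoder(nn.Module):
    """Toy encoder (assumed for illustration): flattens the input and maps it to a unit-norm embedding."""
    def __init__(self, in_dim: int, emb_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),
            nn.Linear(in_dim, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, emb_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.net(x), dim=-1)


def cross_modal_infonce(v: torch.Tensor, a: torch.Tensor, tau: float = 0.07) -> torch.Tensor:
    """Symmetric InfoNCE: matched video/audio pairs in the batch are positives, all other pairs are negatives."""
    logits = v @ a.t() / tau                              # (B, B) cross-modal similarity matrix
    targets = torch.arange(v.size(0), device=v.device)    # matched pair i <-> i
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    # Dummy batch: 8 video clips and 8 speech spectrograms, flattened (shapes are placeholders).
    video = torch.randn(8, 3 * 16 * 64 * 64)
    audio = torch.randn(8, 1 * 64 * 100)

    video_enc = SmallEncoder(video.size(1))
    audio_enc = SmallEncoder(audio.size(1))
    opt = torch.optim.Adam(
        list(video_enc.parameters()) + list(audio_enc.parameters()), lr=1e-4)

    opt.zero_grad()
    loss = cross_modal_infonce(video_enc(video), audio_enc(audio))
    loss.backward()
    opt.step()
    print(f"contrastive loss: {loss.item():.4f}")

After such pre-training, the learned encoders can be reused (frozen or fine-tuned) for downstream tasks such as standard-plane detection, gaze prediction, or anatomy localisation, which is the sense in which the representations are described as transferable.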
Funders
Engineering and Physical Sciences Research Council
European Research Council
Publisher
Springer Science and Business Media LLC