Scaling Up Sign Spotting Through Sign Language Dictionaries
-
Published: 2022-04-05
Issue: 6
Volume: 130
Pages: 1416-1439
-
ISSN: 0920-5691
-
Container-title: International Journal of Computer Vision
-
Language: en
-
Short-container-title: Int J Comput Vis
Authors:
Varol Gül, Momeni Liliane, Albanie Samuel, Afouras Triantafyllos, Zisserman Andrew
Abstract
The focus of this work is sign spotting: given a video of an isolated sign, our task is to identify whether and where it has been signed in a continuous, co-articulated sign language video. To achieve this sign spotting task, we train a model using multiple types of available supervision by: (1) watching existing footage which is sparsely labelled using mouthing cues; (2) reading associated subtitles (readily available translations of the signed content) which provide additional weak supervision; (3) looking up words (for which no co-articulated labelled examples are available) in visual sign language dictionaries to enable novel sign spotting. These three tasks are integrated into a unified learning framework using the principles of Noise Contrastive Estimation and Multiple Instance Learning. We validate the effectiveness of our approach on low-shot sign spotting benchmarks. In addition, we contribute a machine-readable British Sign Language (BSL) dictionary dataset of isolated signs, BslDict, to facilitate study of this task. The dataset, models and code are available at our project page.
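The abstract's unified framework is built on Noise Contrastive Estimation. As a rough illustration only (not the authors' exact formulation), an NCE-style objective encourages the embedding of an isolated dictionary sign to score highest against its matching window in the continuous video, relative to distractor windows. A minimal NumPy sketch, with hypothetical names (`info_nce_loss`, the toy embeddings):

```python
import numpy as np

def info_nce_loss(query, candidates, positive_idx, temperature=0.07):
    """InfoNCE-style contrastive loss: the query (dictionary-sign
    embedding) should be most similar to its positive candidate
    (the matching continuous-video window) among all candidates."""
    # Cosine similarities between the query and each candidate window.
    q = query / np.linalg.norm(query)
    c = candidates / np.linalg.norm(candidates, axis=1, keepdims=True)
    logits = c @ q / temperature
    # Softmax cross-entropy with the positive window as the target class.
    logits = logits - logits.max()  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum())
    return -log_probs[positive_idx]

rng = np.random.default_rng(0)
query = rng.normal(size=16)                 # toy dictionary-sign embedding
candidates = rng.normal(size=(8, 16))       # toy continuous-video windows
candidates[3] = query + 0.1 * rng.normal(size=16)  # near-duplicate positive
loss = info_nce_loss(query, candidates, positive_idx=3)
```

Minimising this loss pulls the dictionary embedding toward its true window and pushes it away from the distractors; the Multiple Instance Learning aspect of the paper (not sketched here) handles the fact that the positive window's exact location within a subtitle-aligned clip is unknown.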
Publisher
Springer Science and Business Media LLC
Subject
Artificial Intelligence, Computer Vision and Pattern Recognition, Software
Cited by: 6 articles