Language agnostic missing subtitle detection
-
Published:2022-06-11
Issue:1
Volume:2022
Page:
-
ISSN:1687-4722
-
Container-title:EURASIP Journal on Audio, Speech, and Music Processing
-
Language:en
-
Short-container-title:J AUDIO SPEECH MUSIC PROC.
Author:
Gupta Honey, Sharma Mayank
Abstract
Subtitles are a crucial component of Digital Entertainment Content (DEC, such as movies and TV shows) localization. With an ever-increasing catalog (≈ 2M titles) and expanding localization (30+ languages), automated subtitle quality checks become paramount. Because subtitle creation is a manual process, subtitles can contain errors such as missing transcriptions, subtitle blocks out of sync with the audio, and incorrect translations. Such erroneous subtitles result in an unpleasant viewing experience and impact viewership. Moreover, manual correction is laborious, costly, and requires expertise in both the audio and subtitle languages. A typical subtitle correction process consists of (1) a linear watch of the movie, (2) identification of the time stamps associated with erroneous subtitle blocks, and (3) the correction itself. Of the three, the time taken by a human expert to watch the entire movie is the most time-consuming step. This paper addresses the problem of missing transcription, where the subtitle blocks corresponding to some speech segments in the DEC are non-existent. We present a solution that augments the human correction process by automatically identifying, in a language-agnostic manner, the timings associated with non-transcribed dialogues. The correction step can then be performed either by a human-in-the-loop mechanism or automatically using neural transcription (speech-to-text in the same language) and translation (text-to-text across languages) engines. Our method uses a language-agnostic neural voice activity detector (VAD) and an audio classifier (AC) trained explicitly on DEC corpora for better generalization. The method consists of three steps: first, we use the VAD to identify the timings associated with dialogues (predicted speech blocks). Second, we refine those timings using the AC module by removing the timings associated with leading and trailing non-speech segments identified as speech by the VAD. Finally, we compare the predicted dialogue timings with the dialogue timings present in the subtitle file (subtitle speech blocks) and flag the missing transcriptions. We empirically demonstrate that, on a human-annotated dataset of missing subtitle speech blocks, the proposed method (a) reduces incorrect predicted missing subtitle timings by 10%, (b) improves the predicted missing subtitle timings by 2.5%, (c) reduces the false positive rate (FPR) of overextending the predicted timings by 77%, and (d) improves the predicted speech block-level precision by 119% over the VAD baseline.
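To illustrate the final step described in the abstract (comparing predicted speech blocks against subtitle speech blocks and flagging missing transcriptions), here is a minimal sketch, not the authors' implementation: it assumes speech segments have already been produced by a VAD/AC pipeline and subtitle timings have already been parsed from an .srt file, and it uses an illustrative overlap-coverage criterion; all function names and the `min_coverage` threshold are assumptions.

```python
# Minimal sketch (illustrative only): flag VAD/AC-predicted speech segments
# that are not sufficiently covered by any subtitle block, i.e. candidate
# missing transcriptions. Names and thresholds are assumptions.

from typing import List, Tuple

Interval = Tuple[float, float]  # (start_seconds, end_seconds)

def overlap(a: Interval, b: Interval) -> float:
    """Length (in seconds) of the overlap between two intervals."""
    return max(0.0, min(a[1], b[1]) - max(a[0], b[0]))

def flag_missing_subtitles(predicted_speech: List[Interval],
                           subtitle_blocks: List[Interval],
                           min_coverage: float = 0.5) -> List[Interval]:
    """Return predicted speech segments whose total overlap with the
    subtitle blocks covers less than `min_coverage` of the segment."""
    missing = []
    for seg in predicted_speech:
        duration = seg[1] - seg[0]
        covered = sum(overlap(seg, blk) for blk in subtitle_blocks)
        if duration > 0 and covered / duration < min_coverage:
            missing.append(seg)
    return missing

if __name__ == "__main__":
    # Speech timings predicted by the VAD and refined by the AC (illustrative values).
    predicted = [(10.0, 12.5), (30.0, 33.0), (60.0, 61.8)]
    # Dialogue timings taken from the subtitle file (illustrative values).
    subtitles = [(9.8, 12.7), (59.9, 62.0)]
    print(flag_missing_subtitles(predicted, subtitles))  # -> [(30.0, 33.0)]
```

In this toy example, the segment from 30.0 s to 33.0 s has no overlapping subtitle block and is therefore flagged as a candidate missing transcription for human or automated correction.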
Publisher
Springer Science and Business Media LLC
Subject
Electrical and Electronic Engineering, Acoustics and Ultrasonics
References
36 articles.
Cited by
1 article.