Affiliations:
1. MIT Media Lab, USA. wbrannon@mit.edu
2. AWS AI Labs, USA. yvvirkar@amazon.com
3. AWS AI Labs, USA. brianjt@amazon.com
Abstract
We investigate how humans perform the task of dubbing video content from one language into another, leveraging a novel corpus of 319.57 hours of video from 54 professionally produced titles. This is the first such large-scale study we are aware of. The results challenge a number of assumptions commonly made in both qualitative literature on human dubbing and machine-learning literature on automatic dubbing, arguing for the importance of vocal naturalness and translation quality over commonly emphasized isometric (character length) and lip-sync constraints, and for a more qualified view of the importance of isochronic (timing) constraints. We also find substantial influence of the source-side audio on human dubs through channels other than the words of the translation, pointing to the need for research on ways to preserve speech characteristics, as well as transfer of semantic properties such as emphasis and emotion, in automatic dubbing systems.
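To make the isometric and isochronic constraints discussed in the abstract concrete, the following is a minimal sketch of how agreement between a source speech segment and its dub is often quantified: isometry as the character-length ratio of the dubbed text to the source text, and isochrony as the temporal overlap (intersection over union) of the two segments' time spans. The data structure, function names, and metric definitions here are illustrative assumptions, not the paper's method.

```python
from dataclasses import dataclass

@dataclass
class Segment:
    text: str     # transcript of the speech segment
    start: float  # onset in seconds
    end: float    # offset in seconds

def isometry_ratio(source: Segment, dub: Segment) -> float:
    """Character-length ratio of dub text to source text.

    Values near 1.0 mean the translation is roughly isometric.
    """
    return len(dub.text) / max(len(source.text), 1)

def isochrony_overlap(source: Segment, dub: Segment) -> float:
    """Temporal intersection-over-union of the two time spans.

    1.0 means the dub occupies exactly the source's time span.
    """
    overlap = max(0.0, min(source.end, dub.end) - max(source.start, dub.start))
    union = max(source.end, dub.end) - min(source.start, dub.start)
    return overlap / union if union > 0 else 0.0

# Toy example: a dub that starts slightly late and runs long.
src = Segment("How are you doing today?", start=1.0, end=2.5)
dub = Segment("¿Cómo te va el día de hoy?", start=1.1, end=2.8)
print(f"isometry ratio:  {isometry_ratio(src, dub):.2f}")   # ~1.08
print(f"isochrony (IoU): {isochrony_overlap(src, dub):.2f}") # ~0.78
```

In this toy pair the dub is about 8% longer than the source and covers roughly 78% of the joint time span; the abstract's finding is that professional dubbers often relax exactly these constraints in favor of vocal naturalness and translation quality.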
Subject
Artificial Intelligence, Computer Science Applications, Linguistics and Language, Human-Computer Interaction, Communication