Abstract
End-to-end speech-to-text translation aims to directly translate speech in one language into text in another, a challenging cross-modal task, particularly in scenarios with limited data. Multi-task learning serves as an effective strategy for sharing knowledge between speech translation and machine translation: it allows models to leverage extensive machine translation data to learn the mapping between source and target languages, thereby improving the performance of speech translation. However, in multi-task learning, finding a set of weights that balances the tasks is difficult and computationally expensive. We propose an adaptive multi-task learning method that dynamically adjusts the task weights based on the proportional losses incurred during training, enabling adaptive balancing of the tasks in speech-to-text translation. Moreover, inherent representation disparities across modalities impede speech translation models from exploiting textual data effectively. To bridge this modality gap, we propose to apply optimal transport at the input of the end-to-end model to find the alignment between speech and text sequences and to learn shared representations between them. Experimental results show that our method effectively improves performance on the Tibetan-Chinese, English-German, and English-French speech translation datasets.
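The abstract does not spell out the weighting rule, so the following is a minimal sketch of one way loss-proportional task weighting can work, assuming two tasks (speech translation and machine translation) and a softmax over loss ratios in the spirit of dynamic weight averaging; the function name, shapes, and the temperature parameter are illustrative assumptions, not the paper's actual formulation.

```python
# Minimal sketch of loss-proportional multi-task weighting (illustrative;
# the paper's exact rule is not given in the abstract). Tasks whose losses
# are decreasing more slowly receive proportionally larger weights.
import torch

def adaptive_weights(prev_losses: torch.Tensor,
                     curr_losses: torch.Tensor,
                     temperature: float = 2.0) -> torch.Tensor:
    """Per-task weights from the ratio of current to previous losses."""
    ratios = curr_losses / (prev_losses + 1e-8)           # relative progress per task
    weights = torch.softmax(ratios / temperature, dim=0)  # normalize across tasks
    return weights * curr_losses.numel()                  # keep the overall loss scale

# Usage: combine the speech-translation (ST) and machine-translation (MT)
# losses into a single training objective.
prev = torch.tensor([2.3, 1.7])         # per-task losses from the previous epoch
curr = torch.tensor([2.1, 1.2])         # per-task losses from the current epoch
w = adaptive_weights(prev, curr)
total_loss = (w.detach() * curr).sum()  # detach weights so only the losses get gradients
```

Likewise, a hedged sketch of how optimal transport can align the two modalities, using entropy-regularized Sinkhorn iterations; the sequence lengths, feature dimension, L2 cost, and uniform marginals are assumptions for illustration, since the abstract only states that OT is applied at the model input.

```python
# Minimal Sinkhorn sketch for aligning speech and text representations with
# optimal transport. Shapes, the L2 cost, and uniform marginals are assumed.
import torch

def sinkhorn(cost: torch.Tensor, eps: float = 0.1, n_iters: int = 50) -> torch.Tensor:
    """Entropy-regularized transport plan for a cost matrix with uniform marginals."""
    n, m = cost.shape
    K = torch.exp(-cost / eps)         # Gibbs kernel
    a = torch.full((n,), 1.0 / n)      # uniform marginal over speech frames
    b = torch.full((m,), 1.0 / m)      # uniform marginal over text tokens
    u, v = torch.ones(n), torch.ones(m)
    for _ in range(n_iters):           # alternating marginal projections
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u.unsqueeze(1) * K * v.unsqueeze(0)

# Usage: an OT loss that pulls the two modalities toward shared representations.
speech = torch.randn(120, 256)         # hypothetical speech-frame embeddings (T x d)
text = torch.randn(20, 256)            # hypothetical text-token embeddings (N x d)
cost = torch.cdist(speech, text, p=2)  # pairwise L2 cost between modalities
cost = cost / cost.max()               # rescale so exp(-cost/eps) stays well-conditioned
plan = sinkhorn(cost)
ot_loss = (plan * cost).sum()          # Wasserstein-style alignment loss
```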
Funder
National Natural Science Foundation of China
Publisher
Springer Science and Business Media LLC