Improvement of Acoustic Models Fused with Lip Visual Information for Low-Resource Speech
Authors:
Yu Chongchong 1, Yu Jiaqi 1, Qian Zhaopeng 1, Tan Yuchen 1
Affiliation:
1. School of Artificial Intelligence, Beijing Technology and Business University, Beijing 100048, China
Abstract
Endangered languages are generally low-resource: as intangible cultural resources, they cannot be renewed once lost. Automatic speech recognition (ASR) is an effective means of preserving such languages. However, low-resource languages have few native speakers and insufficient labeled corpora, so ASR suffers from high speaker dependence and overfitting, which greatly harm recognition accuracy. To address these deficiencies, this paper proposes an audiovisual speech recognition (AVSR) approach based on an LSTM-Transformer architecture. The approach introduces visual-modality information, namely lip movements, to reduce the acoustic model's dependence on individual speakers and on the quantity of training data. Specifically, by fusing audio and visual information, the approach enriches the representation of the speakers' feature space, achieving a speaker adaptation that is difficult to obtain from a single modality. Speaker-dependence experiments further evaluate to what extent audiovisual fusion depends on individual speakers. Experimental results show that the character error rate (CER) of AVSR is 16.9% lower than that of traditional acoustic models in the best-performing scenario, and 11.8% lower than that of lip reading alone. Phoneme recognition accuracy, especially for finals, improves substantially. For initials, accuracy improves for affricates and fricatives, where lip movements are visually salient, and deteriorates for stops, where they are not. AVSR also generalizes to unseen speakers better than a single modality, reducing the CER by as much as 17.2%. AVSR is therefore of great significance for protecting and preserving endangered languages through AI.
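The paper's exact network is not reproduced here, but the following minimal PyTorch sketch illustrates the kind of LSTM-Transformer audiovisual fusion the abstract describes: per-modality LSTM front-ends, feature-level fusion, and a Transformer encoder over the fused sequence. All layer sizes, the concatenation-based fusion, and the class name AVFusionEncoder are illustrative assumptions, not the authors' implementation. (CER, the reported metric, is the character-level edit distance between hypothesis and reference, divided by the reference length.)

```python
# Minimal sketch of LSTM-Transformer audiovisual fusion, assuming PyTorch.
# Layer sizes and concatenation-based fusion are illustrative assumptions,
# not the paper's exact architecture.
import torch
import torch.nn as nn

class AVFusionEncoder(nn.Module):  # hypothetical name, for illustration only
    def __init__(self, audio_dim=80, visual_dim=512, d_model=256,
                 nhead=4, num_layers=4):
        super().__init__()
        # Per-modality bidirectional LSTM front-ends capture short-term
        # temporal dynamics; each outputs d_model features per frame.
        self.audio_lstm = nn.LSTM(audio_dim, d_model // 2,
                                  batch_first=True, bidirectional=True)
        self.visual_lstm = nn.LSTM(visual_dim, d_model // 2,
                                   batch_first=True, bidirectional=True)
        # Concatenated audio+visual features are projected into a shared space,
        # enriching the speaker feature representation across modalities.
        self.fuse = nn.Linear(2 * d_model, d_model)
        # A Transformer encoder then models long-range context over the
        # fused sequence.
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=nhead,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)

    def forward(self, audio_feats, visual_feats):
        # audio_feats:  (batch, T, audio_dim), e.g. filterbank frames
        # visual_feats: (batch, T, visual_dim), e.g. lip-ROI embeddings,
        # assumed already resampled to the audio frame rate.
        a, _ = self.audio_lstm(audio_feats)
        v, _ = self.visual_lstm(visual_feats)
        fused = self.fuse(torch.cat([a, v], dim=-1))
        # Output (batch, T, d_model) would feed a CTC or attention decoder
        # trained on the low-resource corpus.
        return self.encoder(fused)

# Usage with dummy tensors:
model = AVFusionEncoder()
out = model(torch.randn(2, 100, 80), torch.randn(2, 100, 512))
print(out.shape)  # torch.Size([2, 100, 256])
```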
Funder
Ministry of Education Humanities and Social Sciences Research Planning Fund Project of China
Subject
Electrical and Electronic Engineering; Biochemistry; Instrumentation; Atomic and Molecular Physics, and Optics; Analytical Chemistry