Abstract
With the spread of the mobile internet, people around the world can easily create and publish diverse media content, including multilingual and multi-dialectal audio and video. Language or dialect identification (LID) is therefore increasingly important in practical applications such as multilingual and cross-lingual processing, where it serves as the front end for downstream tasks such as speech recognition and voice identification. This paper proposes a neural network framework based on a multiscale residual network (MSRN) and multi-head self-attention (MHSA). The model uses the MSRN to extract features from the language spectrogram and MHSA to emphasize useful features while suppressing irrelevant ones. Training and test sets are constructed from the “Common Voice” and “Oriental Language Recognition” (AP17-OLR) datasets. Experimental results show that the proposed model effectively improves the accuracy and robustness of LID compared with other methods.
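The abstract describes the architecture only at a high level. As a concrete illustration, the PyTorch sketch below shows one plausible way to combine a multiscale residual feature extractor with multi-head self-attention for utterance-level LID; the block layout, channel counts, kernel sizes, and the frequency-pooling step are assumptions made for illustration and are not taken from the paper.

```python
# Minimal sketch: MSRN-style feature extractor + MHSA over time frames,
# assuming log-Mel spectrogram input of shape (batch, 1, n_mels, frames).
import torch
import torch.nn as nn


class MultiScaleResidualBlock(nn.Module):
    """Residual block with parallel 3x3 and 5x5 branches (multiscale)."""
    def __init__(self, channels: int):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, kernel_size=5, padding=2)
        self.fuse = nn.Conv2d(2 * channels, channels, kernel_size=1)
        self.bn = nn.BatchNorm2d(channels)
        self.act = nn.ReLU()

    def forward(self, x):
        y = torch.cat([self.branch3(x), self.branch5(x)], dim=1)
        y = self.bn(self.fuse(y))
        return self.act(x + y)          # residual connection


class MsrnMhsaLid(nn.Module):
    """Hypothetical MSRN + MHSA language-identification classifier."""
    def __init__(self, n_langs: int, channels: int = 32, n_heads: int = 4):
        super().__init__()
        self.stem = nn.Conv2d(1, channels, kernel_size=3, padding=1)
        self.msrn = nn.Sequential(MultiScaleResidualBlock(channels),
                                  MultiScaleResidualBlock(channels))
        self.pool = nn.AdaptiveAvgPool2d((1, None))   # collapse frequency axis
        self.mhsa = nn.MultiheadAttention(channels, n_heads, batch_first=True)
        self.head = nn.Linear(channels, n_langs)

    def forward(self, spec):                          # spec: (B, 1, mels, T)
        feat = self.msrn(self.stem(spec))             # (B, C, mels, T)
        seq = self.pool(feat).squeeze(2).transpose(1, 2)  # (B, T, C)
        attended, _ = self.mhsa(seq, seq, seq)        # weight useful frames
        return self.head(attended.mean(dim=1))        # utterance-level logits


logits = MsrnMhsaLid(n_langs=10)(torch.randn(2, 1, 64, 300))
print(logits.shape)                                   # torch.Size([2, 10])
```

In this sketch the self-attention operates over the time axis of the pooled feature map, which is one common way to let the network emphasize informative frames; the paper's exact feature fusion and classification head may differ.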
Funder
Strengthening Plan of the National Defense Science and Technology Foundation of China
Natural Science Foundation of China