Authors:
Zhang Shiyu, Kong Jianguo, Chen Chao, Li Yabin, Liang Haijun
Abstract
The rise of end-to-end (E2E) speech recognition technology in recent years has overturned the classical design of cascading multiple subtasks and achieved direct mapping from speech input signals to text labels. In this study, a new E2E framework, ResNet–GAU–CTC, is proposed to implement Mandarin speech recognition for air traffic control (ATC). A deep residual network (ResNet) exploits the translation invariance and local correlation of a convolutional neural network (CNN) to extract time-frequency information from the speech signal. A gated attention unit (GAU) uses a gated single-head attention mechanism to better capture long-range dependencies in the sequence, yielding a larger receptive field, richer contextual information, and faster training convergence. The connectionist temporal classification (CTC) criterion eliminates the need for forced frame-level alignments. To address the scarcity of data resources and the distinctive pronunciation norms and contexts of the ATC domain, transfer learning and data augmentation were applied to enhance the robustness of the network and improve the generalization ability of the model. The character error rate (CER) of our model was 11.1% on the expanded Aishell corpus and decreased to 8.0% on the ATC corpus.
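The pipeline described in the abstract can be made concrete with a short sketch. The PyTorch code below is an illustrative reconstruction, not the authors' implementation: the ResNet front end is reduced to two strided convolutions, the GAU layer loosely follows the published gated single-head (ReLU²) attention formulation, and all layer sizes, depths, and the vocabulary size are assumptions.

```python
# Minimal sketch of a ResNet–GAU–CTC style acoustic model (assumed sizes).
import torch
import torch.nn as nn
import torch.nn.functional as F

class GAU(nn.Module):
    """Gated attention unit: gated single-head (ReLU^2) attention with a residual."""
    def __init__(self, dim, expansion=2, qk_dim=128):
        super().__init__()
        hidden = dim * expansion
        self.norm = nn.LayerNorm(dim)
        self.to_uv = nn.Linear(dim, hidden * 2)   # gate u and value v
        self.to_qk = nn.Linear(dim, qk_dim)       # shared base for q and k
        self.q_scale = nn.Parameter(torch.ones(qk_dim))
        self.q_bias = nn.Parameter(torch.zeros(qk_dim))
        self.k_scale = nn.Parameter(torch.ones(qk_dim))
        self.k_bias = nn.Parameter(torch.zeros(qk_dim))
        self.out = nn.Linear(hidden, dim)

    def forward(self, x):                          # x: (B, T, dim)
        y = self.norm(x)
        u, v = self.to_uv(y).chunk(2, dim=-1)
        base = self.to_qk(y)
        q = base * self.q_scale + self.q_bias
        k = base * self.k_scale + self.k_bias
        attn = F.relu(q @ k.transpose(-1, -2) / q.shape[-1] ** 0.5) ** 2
        attn = attn / x.shape[1]                   # simple length normalisation
        return x + self.out(u * (attn @ v))        # gated attention + residual

class ResNetGAUCTC(nn.Module):
    def __init__(self, n_mels=80, dim=256, n_layers=8, vocab_size=4233):
        super().__init__()
        # Two strided 2-D convolutions stand in for the ResNet feature extractor.
        self.frontend = nn.Sequential(
            nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.proj = nn.Linear(32 * (n_mels // 4), dim)
        self.encoder = nn.Sequential(*[GAU(dim) for _ in range(n_layers)])
        self.head = nn.Linear(dim, vocab_size)     # CTC outputs, blank included

    def forward(self, feats):                      # feats: (B, T, n_mels)
        x = self.frontend(feats.unsqueeze(1))      # (B, C, ~T/4, n_mels/4)
        b, c, t, f = x.shape
        x = self.proj(x.permute(0, 2, 1, 3).reshape(b, t, c * f))
        return self.head(self.encoder(x)).log_softmax(-1)
```

During training, the time-major log-probabilities would be passed to torch.nn.CTCLoss together with the downsampled input lengths and the target lengths; no frame-level alignment is required, which is the property the abstract highlights.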