Abstract
With the rise of deep learning, spoken language understanding (SLU) for command-and-control applications such as a voice-controlled virtual assistant can offer reliable hands-free operation to physically disabled individuals. However, due to data scarcity, processing dysarthric speech remains a challenge. Pre-training (part of) the SLU model with supervised automatic speech recognition (ASR) targets or with self-supervised learning (SSL) may help to overcome the lack of data, but no research has shown which pre-training strategy performs better for SLU on dysarthric speech, nor to what extent the SLU task benefits from knowledge transfer from pre-training on dysarthric acoustic tasks. This work compares different mono- and cross-lingual pre-training methodologies (supervised and unsupervised) and quantitatively investigates the benefits of pre-training for SLU tasks on Dutch dysarthric speech. The designed SLU systems consist of a pre-trained speech representation encoder and an SLU decoder that maps encoded features to intents. Four types of pre-trained encoders are evaluated: a mono-lingual time-delay neural network (TDNN) acoustic model, a mono-lingual transformer acoustic model, a cross-lingual transformer acoustic model (Whisper), and a cross-lingual SSL Wav2Vec2.0 model (XLSR-53). These are complemented with three types of SLU decoders: non-negative matrix factorization (NMF), capsule networks, and long short-term memory (LSTM) networks. The four pre-trained encoders are acoustically evaluated on Dutch dysarthric home-automation data using word error rate (WER) to investigate the correlation between the dysarthric acoustic task (ASR) and the semantic task (SLU). By introducing the intelligibility score (IS) as a metric of impairment severity, this paper further quantitatively analyzes dysarthria-severity-dependent models for SLU tasks.
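A minimal sketch of the encoder-decoder SLU pipeline described in the abstract, not the authors' implementation: a pre-trained cross-lingual SSL encoder (here the public XLSR-53 Wav2Vec2.0 checkpoint) extracts frame-level speech representations, and a small LSTM decoder pools them and classifies the utterance into one of a fixed set of intents. The model name is the real Hugging Face checkpoint; the layer sizes, number of intents, and freezing of the encoder are illustrative assumptions.

```python
# Sketch of a pre-trained encoder + LSTM intent decoder, assuming HuggingFace
# transformers and PyTorch; hyperparameters and intent count are hypothetical.
import torch
import torch.nn as nn
from transformers import Wav2Vec2Model, Wav2Vec2FeatureExtractor

class LSTMIntentDecoder(nn.Module):
    def __init__(self, feature_dim: int, hidden_dim: int, num_intents: int):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim,
                            batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden_dim, num_intents)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, frames, feature_dim) -> intent logits: (batch, num_intents)
        encoded, _ = self.lstm(features)
        pooled = encoded.mean(dim=1)  # average over time frames
        return self.classifier(pooled)

# Pre-trained cross-lingual SSL encoder (kept frozen here; fine-tuning is optional).
extractor = Wav2Vec2FeatureExtractor.from_pretrained("facebook/wav2vec2-large-xlsr-53")
encoder = Wav2Vec2Model.from_pretrained("facebook/wav2vec2-large-xlsr-53").eval()
decoder = LSTMIntentDecoder(feature_dim=encoder.config.hidden_size,
                            hidden_dim=128, num_intents=20)  # e.g. 20 home-automation intents

waveform = torch.randn(16000)  # 1 s of dummy 16 kHz audio
inputs = extractor(waveform.numpy(), sampling_rate=16000, return_tensors="pt")
with torch.no_grad():
    features = encoder(inputs.input_values).last_hidden_state
logits = decoder(features)  # train with cross-entropy on intent labels
```

The NMF and capsule-network decoders mentioned in the abstract would replace `LSTMIntentDecoder` in this setup; likewise, the Whisper or TDNN acoustic models would replace the XLSR-53 encoder as the feature extractor.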
Funder
FWO-SBO grant
Flemish Government
China Scholarship Council
Publisher
Springer Science and Business Media LLC
Subject
Electrical and Electronic Engineering, Acoustics and Ultrasonics
Cited by
8 articles.