Affiliation:
1. MIFT Department, University of Messina, 98122 Messina, Italy
Abstract
Fine-tuning has emerged as a powerful technique in machine learning, enabling models to adapt to a specific domain by leveraging pre-trained knowledge. One such application domain is automatic speech recognition (ASR), where fine-tuning plays a crucial role in addressing data scarcity, especially for under-resourced languages. In this study, we applied fine-tuning to atypical speech recognition, focusing on Italian speakers with speech impairments such as dysarthria. Our objective was to build a speaker-dependent voice user interface (VUI) tailored to their needs. To this end, we started from OpenAI’s pre-trained Whisper model, which has been exposed to vast amounts of general speech data, and fine-tuned it for disordered speech using our private corpus of 65 K voice recordings contributed by 208 speech-impaired individuals worldwide. We compared three variants of the Whisper model (small, base, and tiny) to identify the most accurate configuration for handling disordered speech patterns. Finally, we addressed the local deployment of the fine-tuned models on edge computing nodes, with the aim of realizing custom VUIs for persons with impaired speech.
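To illustrate the fine-tuning step described in the abstract, the sketch below adapts a pre-trained Whisper checkpoint with the Hugging Face Transformers library. The dataset objects, hyperparameters, output path, and language setting are hypothetical placeholders for illustration, not the configuration actually used in the study.

```python
# Minimal sketch of fine-tuning a Whisper checkpoint on a custom speech corpus
# with Hugging Face Transformers. All names marked "hypothetical" are placeholders.
from transformers import (
    WhisperProcessor,
    WhisperForConditionalGeneration,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)

model_name = "openai/whisper-small"  # alternatively "openai/whisper-base" or "openai/whisper-tiny"
processor = WhisperProcessor.from_pretrained(model_name, language="italian", task="transcribe")
model = WhisperForConditionalGeneration.from_pretrained(model_name)

# `train_dataset` / `eval_dataset` are hypothetical prepared datasets whose examples
# contain "input_features" (log-Mel spectrograms from the processor) and "labels"
# (tokenized transcripts). In practice a data collator that pads both fields is
# normally supplied via the `data_collator` argument; it is omitted here for brevity.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-dysarthric-it",   # hypothetical output directory
    per_device_train_batch_size=16,
    learning_rate=1e-5,
    num_train_epochs=3,
    fp16=True,
    predict_with_generate=True,
)

trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=train_dataset,             # hypothetical prepared training split
    eval_dataset=eval_dataset,               # hypothetical prepared evaluation split
    tokenizer=processor.feature_extractor,
)
trainer.train()
```

After training, the smaller checkpoints (base, tiny) can be exported and run on resource-constrained edge nodes, which is the deployment scenario targeted by the study.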