Enhancing Voice Cloning Quality through Data Selection and Alignment-Based Metrics
Published: 2023-07-10
Issue: 14
Volume: 13
Page: 8049
ISSN: 2076-3417
Container-title: Applied Sciences
Language: en
Short-container-title: Applied Sciences
Author:
Ander González-Docasal 1,2 (ORCID), Aitor Álvarez 1 (ORCID)
Affiliation:
1. Fundación Vicomtech, Basque Research and Technology Alliance (BRTA), 20009 Donostia-San Sebastián, Spain
2. Department of Electronics, Engineering and Communications, University of Zaragoza, 50009 Zaragoza, Spain
Abstract
Voice cloning, an emerging field in speech processing, aims to generate synthetic utterances that closely resemble the voices of specific individuals. In this study, we investigated the impact of various techniques on improving the quality of voice cloning, focusing specifically on a low-quality dataset; for comparison, we also used two high-quality corpora. We conducted exhaustive evaluations of the quality of the gathered corpora in order to select the most suitable data for training a voice-cloning system. Following these measurements, we performed a series of ablations, removing audio files with a lower signal-to-noise ratio and higher variability in utterance speed in order to decrease the heterogeneity of the corpora. Furthermore, we introduced a novel algorithm that calculates the fraction of aligned input characters by exploiting the attention matrix of the Tacotron 2 text-to-speech system. This algorithm provides a valuable metric for evaluating alignment quality during the voice-cloning process. We present the results of our experiments, demonstrating that the performed ablations significantly increased the quality of the synthesised audio for the challenging low-quality corpus. Notably, our findings indicated that models fine-tuned from a pre-trained model on a 3 h corpus exhibit audio quality comparable to that of models trained from scratch on significantly larger amounts of data.
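A minimal sketch of how the described alignment metric could be computed is given below in Python. The abstract only states that the fraction of aligned input characters is derived from the Tacotron 2 attention matrix; the attention-array layout (decoder steps × input characters), the 0.5 threshold, and the function name aligned_character_fraction are illustrative assumptions, not the authors' exact algorithm.

import numpy as np

def aligned_character_fraction(attention: np.ndarray, threshold: float = 0.5) -> float:
    """Fraction of input characters that receive focused attention at least once.

    `attention` is assumed to have shape (decoder_steps, input_characters),
    as produced by a Tacotron 2-style attention mechanism. A character counts
    as "aligned" if some decoder step assigns it a weight of at least
    `threshold`; both the threshold and this counting rule are assumptions
    made for illustration only.
    """
    # Best attention weight each input character ever receives across decoding.
    max_weight_per_character = attention.max(axis=0)
    aligned_characters = (max_weight_per_character >= threshold).sum()
    return aligned_characters / attention.shape[1]

# Toy usage: four decoder steps attending over three input characters.
toy_attention = np.array([
    [0.90, 0.05, 0.05],
    [0.10, 0.80, 0.10],
    [0.10, 0.60, 0.30],
    [0.10, 0.30, 0.60],
])
print(aligned_character_fraction(toy_attention))  # 1.0: every character is attended at least once

Under these assumptions, the metric approaches 1.0 only when the attention matrix traces a reasonably complete path over the input text, so low values would flag utterances whose synthesis skipped or drifted away from characters, matching the abstract's use of the metric as an alignment-quality signal.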
Funder
Spanish Public Business Entity Red.es IANA project
Subject
Fluid Flow and Transfer Processes, Computer Science Applications, Process Chemistry and Technology, General Engineering, Instrumentation, General Materials Science