Authors:
Anders R. Bargum, Stefania Serafin, Cumhur Erkut
Abstract
Research on deep learning-powered voice conversion (VC) in speech-to-speech scenarios is gaining increasing popularity. Although many works in the field of voice conversion share a common global pipeline, there is considerable diversity in the underlying structures, methods, and neural sub-blocks used across research efforts. It can therefore be challenging to obtain a comprehensive understanding of why particular methods are chosen when training voice conversion models, and the actual hurdles in the proposed solutions are often unclear. To shed light on these aspects, this paper presents a scoping review that explores the use of deep learning in speech analysis, synthesis, and disentangled speech representation learning within modern voice conversion systems. We screened 628 publications from more than 38 venues between 2017 and 2023, followed by an in-depth review of a final database of 130 eligible studies. Based on the review, we summarise the most frequently used deep learning approaches to voice conversion and highlight common pitfalls. We condense the knowledge gathered to identify the main challenges, supply solutions grounded in the analysis, and provide recommendations for future research directions.
References (130 articles):
1. Abe et al., "Voice conversion through vector quantization," ICASSP-88, Int. Conf. Acoust., Speech, Signal Process., 1988.
2. Al-Radhi et al., "Effects of sinusoidal model on non-parallel voice conversion with adversarial learning," Appl. Sci., 2021.
3. Arksey et al., "Scoping studies: towards a methodological framework," Int. J. Soc. Res. Methodol., 2005.
4. Baas et al., "StarGAN-ZSVC: towards zero-shot voice conversion in low-resource contexts," Proc. Southern African Conf. AI Research (SACAIR), Muldersdrift, South Africa, 2020.
5. Baas et al., "GAN you hear me? Reclaiming unconditional speech synthesis from diffusion models," 2023.