Affiliations:
1. Indian Institute of Information Technology, Sri City/Chittoor 517646, India
2. Department of Electronic Systems, Aalborg University, 9220 Aalborg, Denmark
Abstract
Deep representation learning has gained significant momentum in advancing text-dependent speaker verification (TD-SV) systems. When designing deep neural networks (DNNs) for extracting bottleneck (BN) features, the key design choices are the training target, the activation function, and the loss function. In this paper, we systematically study the impact of these choices on TD-SV performance. As training targets, we consider speaker identity, time-contrastive learning (TCL), and autoregressive predictive coding (APC), the first being supervised and the latter two self-supervised. Furthermore, we study a range of loss functions when speaker identity is used as the training target. With regard to activation functions, we study the widely used sigmoid function, the rectified linear unit (ReLU), and the Gaussian error linear unit (GELU). We show experimentally that GELU significantly reduces TD-SV error rates compared to sigmoid, irrespective of the training target. Among the three training targets, TCL performs best; among the loss functions, cross-entropy, joint-softmax, and focal loss outperform the others. Finally, score-level fusion of the different systems further reduces error rates. The representation learning methods are evaluated on the RedDots 2016 challenge database, which consists of short utterances, using TD-SV systems based on the classic Gaussian mixture model-universal background model (GMM-UBM) and i-vector methods.
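For reference, the activation and loss functions compared in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation; the GELU is given in its exact erf form (whether the paper uses the erf or tanh approximation is not stated here), and the focal loss is shown in its standard single-class form with the usual focusing parameter gamma.

```python
import math

def sigmoid(x):
    """Classic logistic activation: squashes input into (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def relu(x):
    """Rectified linear unit: passes positive inputs, zeroes negatives."""
    return max(0.0, x)

def gelu(x):
    """Gaussian error linear unit: GELU(x) = x * Phi(x),
    where Phi is the standard normal CDF (exact erf form)."""
    return x * 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def focal_loss(p, gamma=2.0):
    """Standard focal loss for the true class with predicted
    probability p; reduces to cross-entropy when gamma = 0."""
    return -((1.0 - p) ** gamma) * math.log(p)
```

Note that GELU, unlike ReLU, is smooth and can output small negative values for negative inputs, and the focal loss down-weights well-classified examples (large p) relative to plain cross-entropy.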
Subject
Acoustics and Ultrasonics
Cited by: 2 articles.