A Simple Unsupervised Knowledge-Free Domain Adaptation for Speaker Recognition
-
Published: 2024-01-26
Issue: 3
Volume: 14
Page: 1064
-
ISSN: 2076-3417
-
Container-title: Applied Sciences
-
Language: en
-
Short-container-title: Applied Sciences
Author:
Lin Wan 1,2, Li Lantian 3, Wang Dong 1
Affiliation:
1. Center for Speech and Language Technologies, BNRist, Tsinghua University, Beijing 100084, China
2. College of Management, Shenzhen University, Shenzhen 518055, China
3. School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing 100876, China
Abstract
Despite the great success of speaker recognition models based on deep neural networks, deploying a pre-trained model in real-world scenarios often leads to significant performance degradation due to the domain mismatch between training and testing conditions. Various adaptation methods have been developed to address this issue by modifying either the front-end embedding network or the back-end scoring model. However, existing methods typically rely on knowledge of the network, the scoring model, or even the source data. In this study, we introduce a knowledge-free adaptation approach that requires only unlabeled target data. Our core concept is based on the assumption that domain mismatch primarily stems from distributional distortion in the embedding space, such as shifting, rotation, and scaling, while inter-speaker discrimination is preserved for data from unknown domains. Building on this assumption, we propose clustering LDA (C-LDA), a full-rank linear discriminant analysis (LDA) based on agglomerative hierarchical clustering (AHC), to compensate for this distortion. This approach needs no human labels and does not rely on any knowledge of the source-domain model, making it suitable for real-world applications. Theoretical analysis indicates that with cosine scoring, C-LDA is capable of eliminating distributional distortion related to global shift and within-speaker covariance rotation and scaling. Surprisingly, our experiments demonstrated that this simple approach can outperform more complex methods that require full or partial knowledge, including front-end approaches such as fine-tuning and distribution alignment, and back-end approaches such as unsupervised probabilistic linear discriminant analysis (PLDA) adaptation. Additional experiments demonstrated that C-LDA is insensitive to hyperparameters and works well in both multi-domain and single-domain adaptation scenarios.
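The abstract describes the full C-LDA pipeline: AHC clustering of unlabeled target-domain embeddings, a full-rank LDA estimated from the resulting pseudo-speaker labels, and cosine scoring on the transformed embeddings. The following is a minimal sketch of that idea, assuming NumPy/SciPy/scikit-learn; the function names, the cluster count, and the regularization term are illustrative assumptions and not the authors' implementation.

```python
import numpy as np
from scipy.linalg import eigh
from sklearn.cluster import AgglomerativeClustering


def fit_c_lda(embeddings: np.ndarray, n_clusters: int = 100):
    """Estimate a full-rank LDA transform from unlabeled target-domain embeddings.

    embeddings: (N, D) array of speaker embeddings from the target domain.
    Returns (mean, projection) so that an embedding x is mapped to (x - mean) @ projection.
    """
    # 1) Pseudo-label the unlabeled data with agglomerative hierarchical clustering (AHC).
    #    The cluster count is an illustrative choice, not a value from the paper.
    labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(embeddings)

    # 2) Within- and between-class scatter matrices computed from the pseudo-labels.
    mean = embeddings.mean(axis=0)
    dim = embeddings.shape[1]
    s_w = np.zeros((dim, dim))
    s_b = np.zeros((dim, dim))
    for c in np.unique(labels):
        x_c = embeddings[labels == c]
        mu_c = x_c.mean(axis=0)
        s_w += (x_c - mu_c).T @ (x_c - mu_c)
        s_b += len(x_c) * np.outer(mu_c - mean, mu_c - mean)
    s_w /= len(embeddings)
    s_b /= len(embeddings)

    # 3) Full-rank LDA: solve the generalized eigenproblem S_b v = lambda S_w v and keep
    #    all D directions. The transform centers the data and whitens the within-class
    #    covariance; the remaining rotation does not affect cosine scores.
    _, eigvecs = eigh(s_b, s_w + 1e-6 * np.eye(dim))
    projection = eigvecs[:, ::-1]  # order directions by decreasing discriminability
    return mean, projection


def cosine_score(e1, e2, mean, projection) -> float:
    """Cosine similarity between two embeddings after the C-LDA transform."""
    z1 = (e1 - mean) @ projection
    z2 = (e2 - mean) @ projection
    return float(z1 @ z2 / (np.linalg.norm(z1) * np.linalg.norm(z2)))
```

Because cosine similarity is invariant to rotation, the effective contribution of the transform in this sketch is the global centering and within-class whitening, which is consistent with the abstract's claim that C-LDA removes global shift and within-speaker covariance rotation and scaling.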
Funder
National Natural Science Foundation of China; Fundamental Research Funds for the Central Universities of China