Abstract
Target speaker separation aims to extract the speech of a target speaker from mixed speech while removing extraneous components such as noise. In recent years, deep learning-based speech separation methods have achieved significant breakthroughs and have become mainstream. However, existing methods generally suffer from high system latency and limited performance due to large model sizes. To address these problems, this paper improves both the network structure and the training method. First, a lightweight target speaker separation network based on long short-term memory (LSTM) is proposed, which reduces model size and computational delay while maintaining separation performance. Building on this, a joint training method for target speaker separation is proposed to train and optimize the whole system, together with joint loss functions based on speaker registration and speaker separation. Experimental results show that the proposed lightweight network achieves better performance despite its small size, and that joint training with the proposed loss functions further improves the separation performance of the original model.
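The abstract does not give the exact form of the joint loss. A common formulation in the speaker-separation literature combines a separation term (e.g. negative SI-SNR) with a speaker-registration classification term via a weighted sum; the sketch below illustrates that pattern only. Both loss forms, the function names, and the weight `lam` are assumptions, not the paper's actual objective:

```python
import numpy as np

def si_snr(est, ref, eps=1e-8):
    """Scale-invariant signal-to-noise ratio (dB) between an estimated
    and a reference waveform."""
    ref_energy = np.sum(ref ** 2) + eps
    proj = (np.sum(est * ref) / ref_energy) * ref  # projection of est onto ref
    noise = est - proj
    return 10 * np.log10((np.sum(proj ** 2) + eps) / (np.sum(noise ** 2) + eps))

def joint_loss(est, ref, spk_logits, spk_label, lam=0.1):
    """Hypothetical joint objective: separation loss plus a weighted
    speaker-registration (cross-entropy) term."""
    sep_loss = -si_snr(est, ref)  # maximizing SI-SNR = minimizing its negative
    log_probs = spk_logits - np.log(np.sum(np.exp(spk_logits)))  # log-softmax
    spk_loss = -log_probs[spk_label]  # cross-entropy on the speaker identity
    return sep_loss + lam * spk_loss
```

Minimizing this sum pushes the network to reconstruct the target waveform while keeping the speaker embedding discriminative; the trade-off is controlled by `lam`.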
Funder
National Natural Science Foundation of China
Beijing Natural Science Foundation
Publisher
Springer Science and Business Media LLC
Subject
Electrical and Electronic Engineering, Acoustics and Ultrasonics