Affiliation:
1. Department of Computer Science and Information Engineering, National Taichung University of Science and Technology, Taichung 404336, Taiwan
2. Department of Computer Science and Information Engineering, National United University, Miaoli 360302, Taiwan
Abstract
Recently, neural network technology has shown remarkable progress in speech recognition, including word classification, emotion recognition, and identity recognition. This paper introduces three novel speaker recognition methods to improve accuracy. The first method, called long short-term memory with mel-frequency cepstral coefficients for triplet loss (LSTM-MFCC-TL), utilizes MFCC as input features for the LSTM model and incorporates triplet loss and cluster training for effective training. The second method, bidirectional long short-term memory with mel-frequency cepstral coefficients for triplet loss (BLSTM-MFCC-TL), enhances speaker recognition accuracy by employing a bidirectional LSTM model. The third method, bidirectional long short-term memory with mel-frequency cepstral coefficients and autoencoder features for triplet loss (BLSTM-MFCCAE-TL), utilizes an autoencoder to extract additional AE features, which are then concatenated with the MFCC and fed into the BLSTM model. The results showed that the performance of the BLSTM model was superior to that of the LSTM model, and that the method adding AE features achieved the best learning performance. Moreover, the proposed methods exhibited faster computation times than the reference GMM-HMM model. Therefore, utilizing a pre-trained autoencoder to encode speakers and obtain AE features can significantly enhance speaker recognition performance while also offering faster computation than traditional methods.
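The sketch below illustrates the general idea behind the BLSTM-MFCCAE-TL method described in the abstract: per-frame MFCC vectors are concatenated with bottleneck features from a pre-trained autoencoder and fed to a bidirectional LSTM whose embeddings are trained with triplet loss. All layer sizes, the MFCC dimension, the triplet margin, and the random stand-in data are illustrative assumptions, not the authors' actual architecture or settings.

```python
# Minimal PyTorch sketch of an MFCC + autoencoder-feature + BLSTM + triplet-loss pipeline.
# Dimensions and hyperparameters are assumed for illustration only.
import torch
import torch.nn as nn

class FrameAutoencoder(nn.Module):
    """Per-frame autoencoder; its bottleneck output serves as the AE feature."""
    def __init__(self, n_mfcc=13, bottleneck=8):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_mfcc, 32), nn.ReLU(),
                                     nn.Linear(32, bottleneck))
        self.decoder = nn.Sequential(nn.Linear(bottleneck, 32), nn.ReLU(),
                                     nn.Linear(32, n_mfcc))

    def forward(self, x):                  # x: (batch, frames, n_mfcc)
        z = self.encoder(x)
        return self.decoder(z), z          # reconstruction and AE feature

class BLSTMEmbedder(nn.Module):
    """Bidirectional LSTM mapping an MFCC+AE sequence to a speaker embedding."""
    def __init__(self, in_dim=13 + 8, hidden=64, emb_dim=32):
        super().__init__()
        self.blstm = nn.LSTM(in_dim, hidden, batch_first=True, bidirectional=True)
        self.proj = nn.Linear(2 * hidden, emb_dim)

    def forward(self, x):                  # x: (batch, frames, in_dim)
        out, _ = self.blstm(x)
        emb = self.proj(out[:, -1, :])     # last-frame summary (one simple pooling choice)
        return nn.functional.normalize(emb, dim=-1)

# Illustrative training step; random tensors stand in for real MFCC frames.
ae, net = FrameAutoencoder(), BLSTMEmbedder()
triplet = nn.TripletMarginLoss(margin=0.5)   # margin is an assumed value
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

anchor, positive, negative = (torch.randn(4, 100, 13) for _ in range(3))
with torch.no_grad():                        # AE is assumed pre-trained and frozen here
    feats = [torch.cat([m, ae(m)[1]], dim=-1) for m in (anchor, positive, negative)]

loss = triplet(*(net(f) for f in feats))
opt.zero_grad(); loss.backward(); opt.step()
print(f"triplet loss: {loss.item():.4f}")
```

In this sketch the anchor and positive sequences would come from the same speaker and the negative from a different one; the triplet loss then pulls same-speaker embeddings together while pushing different-speaker embeddings apart by at least the chosen margin.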
Funder
National Science and Technology Council (NSTC) of the Republic of China
Subject
Fluid Flow and Transfer Processes, Computer Science Applications, Process Chemistry and Technology, General Engineering, Instrumentation, General Materials Science
Cited by
2 articles.