Abstract
Distinguishing authentic from fake audio has become increasingly difficult due to the growing accuracy of text-to-speech models, posing a serious threat to speaker verification systems. Moreover, audio deepfakes are an increasingly likely source of deception as sophisticated methods for producing synthetic speech continue to develop. The ASVspoof dataset has recently been used extensively in research on audio deepfake detection, together with a variety of machine learning and deep learning methods. The work proposed in this paper combines data augmentation techniques with a hybrid feature extraction method at the front end. Two variants of audio augmentation and the Synthetic Minority Over-sampling Technique (SMOTE) are used, each combined individually with Mel-Frequency Cepstral Coefficients (MFCC), Gammatone Cepstral Coefficients (GTCC), and a hybrid of these two feature extraction methods for front-end feature extraction. At the back end, two deep learning models, Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU), and two machine learning (ML) classifiers, Random Forest (RF) and Support Vector Machine (SVM), are used. The ASVspoof 2019 Logical Access (LA) partition is used for training and evaluation, and the ASVspoof 2021 deepfake partition for testing. Analysis of the results shows that the combination of MFCC+GTCC with SMOTE at the front end and LSTM at the back end outperforms all other models, achieving 99% test accuracy and a 1.6% Equal Error Rate (EER) on the deepfake partition. This best combination is also tested on the DEepfake CROss-lingual (DECRO) dataset. To assess the effectiveness of the proposed model under noisy scenarios, we analyse the best model under noisy conditions by adding babble noise, street noise, and car noise to the test data.
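The sketch below illustrates the general shape of the described pipeline: class balancing with SMOTE followed by an LSTM classifier over hybrid cepstral features. It is a minimal, illustrative assumption, not the authors' implementation; the library choices (imbalanced-learn, Keras), the synthetic placeholder data standing in for ASVspoof features, and all hyperparameters (64 LSTM units, 26-dimensional feature vectors) are the editor's, since the abstract does not specify them.

```python
import numpy as np
from imblearn.over_sampling import SMOTE
from tensorflow.keras import layers, models

rng = np.random.default_rng(0)

# Placeholder stand-in for the hybrid front-end: in practice each row would
# hold 13 MFCCs (e.g. librosa.feature.mfcc) stacked with 13 GTCCs, averaged
# over time per utterance. Shapes and class skew here are illustrative only.
X = rng.normal(size=(1000, 26)).astype("float32")
y = (rng.random(1000) < 0.1).astype("int32")  # imbalanced: ~10% minority class

# SMOTE synthesises new minority-class examples to balance the training set.
X_bal, y_bal = SMOTE(random_state=0).fit_resample(X, y)

# LSTM back-end: the 26-dim feature vector is fed as a length-26 sequence
# with one channel per step; a sigmoid output scores bonafide vs. spoof.
model = models.Sequential([
    layers.Input(shape=(26, 1)),
    layers.LSTM(64),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.fit(X_bal[..., np.newaxis], y_bal, epochs=5, batch_size=64, verbose=0)
```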