Abstract
This article introduces a novel approach to identifying deepfake threats in audio streams, specifically the detection of synthetic speech generated by text-to-speech (TTS) algorithms. The system rests on two critical components: the Vocal Emotion Analysis (VEA) Network, which captures the emotional nuances expressed in speech, and the Supervised Classifier for Deepfake Detection, which uses the emotional features extracted by the VEA to distinguish authentic from fabricated audio tracks. The system exploits a persistent weakness of deepfake algorithms, their inability to replicate the emotional complexity inherent in human speech, and thereby adds a semantic layer of analysis that strengthens detection. The robustness of the proposed methodology was rigorously evaluated across a variety of datasets, ensuring its efficacy is not confined to controlled conditions but extends to realistic and challenging environments. This was achieved through data augmentation techniques, including the introduction of additive white noise, which mimics the variability encountered in real-world audio processing. The results show that the system's performance is consistent across datasets and that it maintains high accuracy in the presence of background noise, particularly when trained on noise-augmented data. By leveraging emotional content as a distinctive feature and applying supervised machine learning techniques, the proposed framework offers robust protection against the manipulation of audio content and is poised to strengthen the integrity of digital communications in an era when synthetic media is proliferating at an unprecedented rate.
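The noise augmentation mentioned in the abstract can be illustrated with a minimal sketch. The function below adds white Gaussian noise to an audio signal at a chosen signal-to-noise ratio; the function name, the SNR parameterization, and the use of NumPy are illustrative assumptions, not details taken from the paper's pipeline.

```python
import numpy as np

def add_white_noise(signal: np.ndarray, snr_db: float, rng=None) -> np.ndarray:
    """Add white Gaussian noise to a 1-D audio signal at a target SNR (dB).

    Generic augmentation sketch: the paper's exact noise levels and
    mixing procedure are assumptions, not specified in the abstract.
    """
    rng = np.random.default_rng() if rng is None else rng
    # Scale noise power so that 10 * log10(P_signal / P_noise) == snr_db.
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise
```

In a training pipeline, such a function would typically be applied on the fly with randomly drawn SNR values so the classifier sees a different noisy version of each utterance on every epoch.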
Publisher
Uniwersytet Warminsko-Mazurski