Affiliation:
1. School of Computer Science and Information Engineering, Hefei University of Technology, China
2. College of System Engineering, National University of Defense Technology, China
Abstract
The development of speech synthesis technology has increased attention toward the threat of spoofed speech. Although various high-performance spoofing countermeasures have been proposed in recent years, a particular scenario is overlooked: partially spoofed audio, where an utterance may contain both spoofed and bona fide segments. Research on partially spoofed speech detection remains scarce. Existing methods either train on partially spoofed speech at the utterance level, causing gradient conflicts at the segment level, or train directly on segment-level data, which requires segment labels that are difficult to obtain in practice. In this study, to better detect partially spoofed speech when only utterance labels are available, we formulate partially spoofed speech detection as a multiple instance learning (MIL) problem. Typical MIL uses a pooling layer to fuse segment scores into a single utterance-level score; we propose a hybrid MIL (H-MIL) framework based on max and log-sum-exp pooling, which learns better segment representations and improves partially spoofed speech detection performance. Theoretical and experimental verification shows that H-MIL effectively relieves the gradient conflict and gradient vanishing problems. In addition, we analyze the local correlations between segments and introduce a local self-attention mechanism to enhance segment features, which further improves detection performance.
In our experiments, we report detection results at both the segment and utterance levels, along with detailed visualization analyses, including the effect of the spoof ratio and cross-dataset detection. The experimental results demonstrate the effectiveness of our method at both the utterance and segment levels, especially against low-spoof-ratio attacks, and confirm that our approach handles partially spoofed speech detection better than previous methods.
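The pooling operations named in the abstract can be illustrated with a short sketch. The function names, the sharpness parameter `r`, and the interpolation weight `alpha` are illustrative assumptions, not the paper's actual formulation: max pooling scores an utterance by its most spoof-like segment (propagating gradient to only that segment), while log-sum-exp (LSE) pooling is a smooth approximation of max that lets every segment contribute a gradient.

```python
import numpy as np

def max_pool(scores):
    # Max pooling: the utterance score is the most spoof-like segment score.
    # Only the maximal segment receives gradient during training.
    return float(np.max(scores))

def lse_pool(scores, r=1.0):
    # Log-sum-exp pooling: a smooth upper approximation of the mean that
    # approaches max pooling as the sharpness r grows; every segment
    # contributes a (weighted) gradient.
    s = np.asarray(scores, dtype=float)
    return float(np.log(np.mean(np.exp(r * s))) / r)

def hybrid_pool(scores, r=1.0, alpha=0.5):
    # Hypothetical hybrid: interpolate between the two pooled scores.
    # (A sketch only; the paper's H-MIL combination may differ.)
    return alpha * max_pool(scores) + (1 - alpha) * lse_pool(scores, r)

segment_scores = [0.1, 0.9, 0.2]  # per-segment spoof scores for one utterance
print(max_pool(segment_scores))
print(lse_pool(segment_scores, r=1.0))
print(hybrid_pool(segment_scores))
```

Because LSE interpolates between the mean (small `r`) and the max (large `r`), it avoids both the vanishing gradients of pure max pooling and the segment-level gradient conflicts of averaging over mixed spoofed/bona fide segments.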
Funder
National Natural Science Foundation of China
Publisher
Association for Computing Machinery (ACM)
Subject
Artificial Intelligence, Theoretical Computer Science