Development of Supervised Speaker Diarization System Based on the PyAnnote Audio Processing Library
Authors:
Khoma Volodymyr 1,2, Khoma Yuriy 2,3, Brydinskyi Vitalii 2,3, Konovalov Alexander 3
Affiliations:
1. Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, 45-758 Opole, Poland
2. Institute of Computer Technologies, Automation and Metrology, Lviv Polytechnic National University, 79013 Lviv, Ukraine
3. Vidby AG, Suurstoffi 8, 6343 Rotkreuz, Switzerland
Abstract
Diarization is an important task in audio data processing, as it solves the problem of dividing a single analyzed call recording into several speech recordings, each of which belongs to one speaker. Diarization systems segment audio recordings by defining the time boundaries of utterances, and typically use unsupervised methods to group utterances belonging to individual speakers, but do not answer the question “who is speaking?” On the other hand, there are biometric systems that identify individuals on the basis of their voices, but such systems are designed with the prerequisite that only one speaker is present in the analyzed audio recording. However, some applications involve the need to identify multiple speakers that interact freely in an audio recording. This paper proposes two architectures of speaker identification systems based on a combination of diarization and identification methods, which operate on the basis of segment-level or group-level classification. The open-source PyAnnote framework was used to develop the system. The performance of the speaker identification system was verified through the application of the AMI Corpus open-source audio database, which contains 100 h of annotated and transcribed audio and video data. The research method consisted of four experiments to select the best-performing supervised diarization algorithms on the basis of PyAnnote. The first experiment was designed to investigate how the selection of the distance function between vector embeddings affects the reliability of identifying a speaker’s utterance in a segment-level classification architecture. The second experiment examines the architecture of cluster-centroid (group-level) classification, i.e., the selection of the best clustering and classification methods. The third experiment investigates the impact of different segmentation algorithms on the accuracy of identifying speaker utterances, and the fourth examines embedding window sizes. Experimental results demonstrated that the group-level approach offered better identification results compared to the segment-level approach, while the latter had the advantage of real-time processing.
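The two architectures described in the abstract can be illustrated with a minimal sketch. Note that the speaker names, embedding dimensionality, and vectors below are hypothetical placeholders: in the actual system, embeddings would be high-dimensional vectors produced by a PyAnnote embedding model, and clustering would use a dedicated algorithm rather than a simple mean. Cosine distance is shown as one candidate distance function; the paper's first experiment compares several.

```python
import numpy as np

def cosine_distance(a, b):
    """One candidate distance function between embedding vectors."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical enrolled reference embeddings (3-dim for illustration only).
enrolled = {
    "speaker_A": np.array([1.0, 0.1, 0.0]),
    "speaker_B": np.array([0.0, 1.0, 0.2]),
}

def identify_segment(embedding, references):
    """Segment-level classification: each segment embedding is assigned
    to the enrolled speaker with the smallest distance."""
    return min(references, key=lambda s: cosine_distance(embedding, references[s]))

# Group-level classification: embeddings grouped by the diarization stage
# are reduced to a cluster centroid, and the centroid is classified once.
def identify_group(cluster_embeddings, references):
    centroid = np.mean(cluster_embeddings, axis=0)
    return identify_segment(centroid, references)

segment = np.array([0.9, 0.2, 0.1])
print(identify_segment(segment, enrolled))  # closest to speaker_A

cluster = [np.array([0.1, 0.9, 0.3]), np.array([0.0, 1.1, 0.1])]
print(identify_group(cluster, enrolled))    # centroid closest to speaker_B
```

The segment-level path classifies each segment as it arrives, which is what makes real-time processing possible; the group-level path must wait for clustering to finish but averages out noisy individual embeddings.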
Subject
Electrical and Electronic Engineering; Biochemistry; Instrumentation; Atomic and Molecular Physics, and Optics; Analytical Chemistry
Cited by
5 articles.