Abstract
One of the greatest challenges in the development of binaural machine audition systems is the disambiguation between front and back audio sources, particularly in complex spatial audio scenes. The goal of this work was to develop a method for discriminating between front- and back-located ensembles in binaural recordings of music. To this end, 22,496 binaural excerpts, representing either front- or back-located ensembles, were synthesized by convolving multi-track music recordings with 74 sets of head-related transfer functions (HRTFs). The discrimination method was developed using both the traditional approach, involving hand-engineering of features, and a deep learning technique incorporating a convolutional neural network (CNN). According to the results obtained under HRTF-dependent test conditions, the CNN showed very high discrimination accuracy (99.4%), slightly outperforming the traditional method. However, under the HRTF-independent test scenario, the CNN performed worse than the traditional algorithm, highlighting the importance of testing the algorithms under HRTF-independent conditions and indicating that the traditional method may generalize better than the CNN. A minimum of 20 HRTFs is required to achieve satisfactory generalization performance for the traditional algorithm, and 30 HRTFs for the CNN. The minimum duration of audio excerpts required by both the traditional and CNN-based methods was assessed as 3 s. Feature importance analysis, based on a gradient attribution mapping technique, revealed that for both the traditional and the deep learning methods, the frequency band between 5 and 6 kHz is particularly important for discriminating between front and back ensemble locations. Linear-frequency cepstral coefficients, interaural level differences, and audio bandwidth were identified as the key descriptors facilitating the discrimination process in the traditional approach.
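The pipeline outlined in the abstract (convolving multi-track stems with HRTF impulse responses to obtain binaural excerpts, then extracting interaural cues such as ILDs) can be illustrated with a short sketch. The Python code below is only a minimal illustration, not the authors' implementation: the function and variable names (synthesize_binaural, band_ild, the toy signals) are hypothetical, and the 5-6 kHz band default simply mirrors the band the abstract highlights as important.

import numpy as np
from scipy.signal import fftconvolve, butter, sosfilt

def synthesize_binaural(tracks, hrirs_left, hrirs_right):
    # Convolve each mono stem with the left/right HRIRs of its assumed
    # direction and sum the results into a two-channel (binaural) signal.
    n = max(len(t) + max(len(hl), len(hr)) - 1
            for t, hl, hr in zip(tracks, hrirs_left, hrirs_right))
    left, right = np.zeros(n), np.zeros(n)
    for track, hl, hr in zip(tracks, hrirs_left, hrirs_right):
        l = fftconvolve(track, hl)
        r = fftconvolve(track, hr)
        left[:len(l)] += l
        right[:len(r)] += r
    return np.stack([left, right])

def band_ild(binaural, fs, f_lo=5000.0, f_hi=6000.0):
    # Interaural level difference (dB) in one frequency band; the 5-6 kHz
    # default mirrors the band highlighted in the abstract.
    sos = butter(4, [f_lo, f_hi], btype="bandpass", fs=fs, output="sos")
    left = sosfilt(sos, binaural[0])
    right = sosfilt(sos, binaural[1])
    eps = 1e-12
    return 10.0 * np.log10((np.mean(left ** 2) + eps) / (np.mean(right ** 2) + eps))

if __name__ == "__main__":
    fs = 48000
    rng = np.random.default_rng(0)
    # Toy stand-ins for four 3-second stems and one HRTF set (HRIR pairs);
    # real use would load measured HRIRs and multi-track recordings instead.
    tracks = [rng.standard_normal(3 * fs) for _ in range(4)]
    hrirs_l = [rng.standard_normal(256) * 0.01 for _ in range(4)]
    hrirs_r = [rng.standard_normal(256) * 0.01 for _ in range(4)]
    binaural = synthesize_binaural(tracks, hrirs_l, hrirs_r)
    print("ILD in 5-6 kHz band: %.2f dB" % band_ild(binaural, fs))

In a real experiment, the binaural excerpts produced this way would be labeled by ensemble location (front or back) and fed either to a hand-engineered feature extractor or to a CNN, as the abstract describes.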
Funder
Ministerstwo Nauki i Szkolnictwa Wyższego (Ministry of Science and Higher Education, Poland)
Publisher
Springer Science and Business Media LLC
Subject
Electrical and Electronic Engineering, Acoustics and Ultrasonics
Cited by
1 article.