Affiliations:
1. Department of Electrical and Computer Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
2. Artificial Intelligence Department, Sungkyunkwan University, Suwon 16419, Republic of Korea
3. Department of Intelligent Software, Sungkyunkwan University, Suwon 16419, Republic of Korea
Abstract
Sound localization is a crucial aspect of human auditory perception. Virtual reality (VR) technologies provide immersive audio platforms that allow listeners to experience natural sounds based on their ability to localize sound. However, the spatial audio these platforms generate is typically based on a generic head-related transfer function (HRTF), and because HRTFs vary considerably across individuals, the rendered sounds are often perceived and localized inaccurately. In this study, we investigated the disparities between where users perceive sound sources and where the platform places them, and we examined whether users can be trained to adapt to the platform-generated sound sources. Using the Microsoft HoloLens 2 as the virtual platform, we collected data from 12 subjects across six training sessions conducted over 2 weeks. We employed three modes of error guidance during training to assess their effects on sound localization; in particular, we studied the impact of multimodal guidance, i.e., visual and sound guidance combined with kinesthetic/postural guidance, on training effectiveness. We analyzed the data using subject-wise paired statistics, both for the training effect between the pre- and post-session tests within a session and for the retention effect between two separate sessions. Our findings indicate that the training effect between pre- and post-sessions is statistically significant, particularly when kinesthetic/postural guidance is combined with visual and sound guidance, whereas visual error guidance alone was largely ineffective. In contrast, for the retention effect between separate sessions, we found no statistically meaningful effect for any of the three error guidance modes over the 2-week training period. These findings can contribute to the improvement of VR technologies by ensuring they are designed to optimize human sound localization abilities.
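Note: As an illustration of the subject-wise paired analysis described above, the sketch below runs a paired t-test on per-subject localization errors before and after a training session. The abstract does not specify the exact statistic used, and all numbers here are hypothetical, invented purely for demonstration.

# Illustrative sketch (not the authors' code): subject-wise paired
# comparison of pre- vs post-session localization error.
import numpy as np
from scipy.stats import ttest_rel

# Hypothetical mean angular localization error (degrees) per subject,
# measured before and after one training session (12 subjects).
pre_error  = np.array([18.2, 21.5, 16.9, 24.1, 19.8, 22.3,
                       17.4, 20.6, 23.0, 18.9, 21.1, 19.5])
post_error = np.array([14.1, 18.0, 15.2, 19.7, 16.3, 18.8,
                       15.9, 17.2, 19.5, 15.8, 18.4, 16.7])

# Paired t-test: each subject serves as their own control, so the test
# operates on per-subject differences rather than group means.
t_stat, p_value = ttest_rel(pre_error, post_error)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")

The same pairing scheme applies to the retention analysis, with the post-session error of one session paired against the pre-session error of the next.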
Funder
National Research Foundation (NRF) of Korea
Korea Ministry of Science and ICT
AI Graduate School Program
ICT Consilience Program