Affiliation:
1. Data Processing Institute, Technische Universität München, 80290 Munich, Germany
Abstract
Telepresence is generally described as the feeling of being immersed in a remote environment, whether virtual or real. A multimodal telepresence environment equipped with modalities such as vision, audition, and haptics improves immersion and augments overall perceptual presence. The present work focuses on acoustic telepresence at both the teleoperator and operator sites. On the teleoperator side, we build a novel binaural sound source localizer based on generic Head-Related Transfer Functions (HRTFs). The localizer estimates the direction of a single sound source, given as azimuth and elevation angles in free space, using only two microphones, and its algorithm is computationally efficient compared with known algorithms for similar localization tasks. On the operator side, the paper addresses the problem of spatially interpolating HRTFs for densely sampled, high-fidelity 3D sound synthesis. In our telepresence application scenario, the synthesized 3D sound is presented to the operator over headphones and is intended to achieve high-fidelity acoustic immersion. Using measured HRTF data, we create interpolated HRTFs between the existing functions by means of a matrix-valued interpolation function. A comparison with existing interpolation methods shows that the new method offers superior performance and achieves high-fidelity reconstructions of HRTFs.
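The abstract does not detail the matrix-valued interpolation function itself, so the following is only a minimal sketch of the general idea of HRTF spatial interpolation: estimating the HRTF pair for an unmeasured direction from nearby measured directions, here by simple inverse-distance weighting of the k nearest neighbors. All names (hrtf_db, interpolate_hrtf, the synthetic data) are hypothetical and not taken from the paper.

```python
# Hedged sketch: HRTF interpolation by inverse-distance weighting of neighboring
# measured HRTFs. This is NOT the paper's matrix-valued method, only an
# illustration of the interpolation problem it addresses.
import numpy as np

def angular_distance(az1, el1, az2, el2):
    """Great-circle distance (radians) between two directions given in degrees."""
    az1, el1, az2, el2 = np.radians([az1, el1, az2, el2])
    cos_d = (np.sin(el1) * np.sin(el2)
             + np.cos(el1) * np.cos(el2) * np.cos(az1 - az2))
    return np.arccos(np.clip(cos_d, -1.0, 1.0))

def interpolate_hrtf(hrtf_db, az, el, k=4, eps=1e-6):
    """Estimate the HRTF pair (left/right impulse responses) for an unmeasured
    direction (az, el) from the k nearest measured directions.

    hrtf_db: list of ((azimuth_deg, elevation_deg), np.ndarray of shape (2, N))
    """
    dists = np.array([angular_distance(az, el, a, e) for (a, e), _ in hrtf_db])
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + eps)              # inverse-distance weights
    weights /= weights.sum()
    hrirs = np.stack([hrtf_db[i][1] for i in nearest])  # shape (k, 2, N)
    return np.tensordot(weights, hrirs, axes=1)         # weighted average, (2, N)

# Example usage with synthetic data: four measured directions, query in between.
rng = np.random.default_rng(0)
hrtf_db = [((az, el), rng.standard_normal((2, 256)))
           for az in (0, 30) for el in (0, 15)]
h_interp = interpolate_hrtf(hrtf_db, az=15.0, el=7.5)
print(h_interp.shape)  # (2, 256): interpolated left/right impulse responses
```

A matrix-valued interpolation function, as named in the abstract, would replace the scalar weights above with direction-dependent matrix operators acting jointly on the measured HRTFs; the scalar-weighted average shown here is the baseline such a method is typically compared against.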
Subject
Computer Vision and Pattern Recognition, Human-Computer Interaction, Control and Systems Engineering, Software
Cited by
9 articles.