Affiliation:
1. Department of Computer Science, University of British Columbia, Vancouver, British Columbia, V6T 1Z4, Canada
Abstract
We describe a methodology for virtual reality designers to capture and resynthesize the variations in sound made by objects when we interact with them through contact, such as touch. The timbre of contact sounds can vary greatly, depending on both the listener's location relative to the object and the interaction point on the object itself. We believe that an accurate rendering of this variation greatly enhances the feeling of immersion in a simulation. To this end, we model the variation with an efficient algorithm based on modal synthesis. The model contains a vector field defined on the product space of contact locations and listening positions around the object. The modal data are sampled on this high-dimensional space using an automated measuring platform. A parameter-fitting algorithm is presented that recovers the model parameters from a large set of sound recordings made around each object and creates a continuous timbre field by interpolation. The model is subsequently rendered in a real-time simulation with integrated haptic, graphic, and audio display. We describe our experience with an implementation of this system and an informal evaluation of the results.
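As a rough illustration of the synthesis model the abstract describes (a minimal sketch, not the authors' implementation): modal synthesis renders a contact sound as a sum of exponentially damped sinusoids, and a timbre field can supply position-dependent modal gains by interpolating between measured samples. The function names, the inverse-distance weighting scheme, and all numeric values below are illustrative assumptions; the paper uses its own parameter-fitting and interpolation algorithm.

```python
import numpy as np

def synthesize_modes(freqs, dampings, gains, duration, sr=44100):
    """Modal synthesis: a sum of exponentially damped sinusoids.

    freqs    -- modal frequencies in Hz (fixed per object)
    dampings -- decay rates in 1/s (fixed per object)
    gains    -- per-mode amplitudes (these vary over the timbre field)
    """
    t = np.arange(int(duration * sr)) / sr
    out = np.zeros_like(t)
    for f, d, a in zip(freqs, dampings, gains):
        out += a * np.exp(-d * t) * np.sin(2.0 * np.pi * f * t)
    return out

def interpolate_gains(query, sample_points, sample_gains, eps=1e-9):
    """Interpolate modal gains over the (contact location x listener
    position) space by inverse-distance weighting of measured samples.

    query         -- coordinates of the queried point in the product space
    sample_points -- (N, D) array of measured sample coordinates
    sample_gains  -- (N, K) array of fitted gains, one row per sample
    """
    d = np.linalg.norm(sample_points - query, axis=1)
    w = 1.0 / (d + eps)          # closer samples dominate
    w /= w.sum()                 # normalize to a convex combination
    return w @ sample_gains      # blended gain vector for K modes

# Example with two measured field samples and three modes (all values made up):
pts = np.array([[0.0, 0.0], [1.0, 0.0]])             # sampled field coordinates
gains = np.array([[1.0, 0.5, 0.2], [0.2, 0.9, 0.4]]) # fitted per-mode gains
g = interpolate_gains(np.array([0.3, 0.0]), pts, gains)
snd = synthesize_modes([440.0, 880.0, 1320.0], [5.0, 8.0, 12.0], g, duration=1.0)
```

Keeping the frequencies and dampings fixed while interpolating only the gains mirrors the intuition that an object's resonant modes are intrinsic to it, while how strongly each mode is excited and heard depends on where you touch it and where you listen.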
Subject
Computer Vision and Pattern Recognition, Human-Computer Interaction, Control and Systems Engineering, Software
Cited by
9 articles.