Example-guided physically based modal sound synthesis

Author:

Zhimin Ren¹, Hengchin Yeh¹, Ming C. Lin¹

Affiliation:

1. University of North Carolina at Chapel Hill

Abstract

Linear modal synthesis methods have often been used to generate sounds for rigid bodies. One of the key challenges in widely adopting such techniques is the lack of automatic determination of satisfactory material parameters that recreate the realistic audio quality of sounding materials. We introduce a novel method that uses prerecorded audio clips to estimate material parameters capturing the inherent quality of recorded sounding materials. Our method extracts perceptually salient features from audio examples. Based on psychoacoustic principles, we design a parameter estimation algorithm that uses these salient features within an optimization framework to guide the search for the best material parameters for modal synthesis. We also present a method that compensates for the differences between the real-world recording and sound synthesized using linear modal synthesis alone to create the final synthesized audio. The resulting audio generated by this sound synthesis pipeline preserves the same sense of material as the recorded audio example. Moreover, both the estimated material parameters and the residual compensation transfer naturally to virtual objects of different sizes and shapes, and the synthesized sounds vary accordingly. A perceptual study shows that the results of this system compare well with real-world recordings in terms of material perception.
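The abstract builds on linear modal synthesis, in which a struck rigid body's sound is modeled as a sum of exponentially decaying sinusoids whose frequencies, decay rates, and amplitudes are determined by the object's geometry and material parameters. A minimal sketch of that forward model is below; the specific mode frequencies, damping coefficients, and amplitudes are hypothetical placeholders, not values from the paper, whose contribution is estimating such parameters from recordings.

```python
import numpy as np

def modal_synthesis(freqs, dampings, amps, sr=44100, duration=1.0):
    """Linear modal model: a sum of exponentially decaying sinusoids.

    freqs    -- mode frequencies in Hz
    dampings -- per-mode decay rates in 1/s (material-dependent)
    amps     -- per-mode excitation amplitudes (impact-dependent)
    """
    t = np.arange(int(sr * duration)) / sr
    signal = np.zeros_like(t)
    for f, d, a in zip(freqs, dampings, amps):
        signal += a * np.exp(-d * t) * np.sin(2 * np.pi * f * t)
    return signal

# Hypothetical modes for a small resonant object (illustrative only).
s = modal_synthesis(freqs=[440.0, 1230.0, 2750.0],
                    dampings=[8.0, 15.0, 30.0],
                    amps=[1.0, 0.6, 0.3])
```

In an example-guided pipeline like the one described above, the damping and frequency parameters would be fit so that the synthesized modes match perceptually salient features extracted from a recording, rather than being set by hand as here.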

Publisher

Association for Computing Machinery (ACM)

Subject

Computer Graphics and Computer-Aided Design


Cited by 61 articles.

1. DiffSound: Differentiable Modal Sound Rendering and Inverse Rendering for Diverse Inference Tasks;Special Interest Group on Computer Graphics and Interactive Techniques Conference Conference Papers '24;2024-07-13

2. A Novel Visuo-Tactile Object Recognition Pipeline using Transformers with Feature Level Fusion;2024 International Joint Conference on Neural Networks (IJCNN);2024-06-30

3. A multimodal multitask deep learning framework for vibrotactile feedback and sound rendering;Scientific Reports;2024-06-10

4. Real‐time Neural Rendering of Dynamic Light Fields;Computer Graphics Forum;2024-04-23

5. Listen2Scene: Interactive material-aware binaural sound propagation for reconstructed 3D scenes;2024 IEEE Conference Virtual Reality and 3D User Interfaces (VR);2024-03-16
