Auditory-visual scenes for hearing research

Authors:

Steven van de Par, Stephan D. Ewert, Lubos Hladek, Christoph Kirsch, Julia Schütze, Josep Llorca-Bofí, Giso Grimm, Maartje M.E. Hendrikse, Birger Kollmeier, Bernhard U. Seeber

Abstract

While experimentation with synthetic stimuli in abstracted listening situations has a long-standing and successful history in hearing research, there is increasing interest in closing the remaining gap towards real-life listening by replicating situations with high ecological validity in the lab. This is important for understanding the underlying auditory mechanisms and their relevance in real-life situations, as well as for developing and evaluating increasingly sophisticated algorithms for hearing assistance. A range of ‘classical’ stimuli and paradigms has evolved into de-facto standards in psychoacoustics; they are simplistic and can easily be reproduced across laboratories. While they ideally allow for across-laboratory comparisons and reproducible research, they lack the acoustic stimulus complexity and the visual information present in everyday communication and listening situations. This contribution aims to provide and establish an extendable set of complex auditory-visual scenes for hearing research that allows for ecologically valid testing in realistic scenes while also supporting reproducibility and comparability of scientific results. Three virtual environments are provided (underground station, pub, living room), each consisting of a detailed visual model, an acoustic geometry model with acoustic surface properties, and a set of acoustic measurements made in the corresponding real-world environment. The current data set enables i) audio-visual research in a reproducible set of environments, ii) comparison of room acoustic simulation methods against “ground truth” acoustic measurements, and iii) a condensation point for future extensions and contributions towards standardized test cases for ecologically valid hearing research in complex scenes.
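As an illustration of use case ii), a simulated room impulse response can be compared against a measured one via a standard room acoustic descriptor such as the reverberation time. The following is a minimal Python sketch, not part of the data set itself, using Schroeder backward integration; the file names (pub_rir_measured.wav, pub_rir_simulated.wav) are hypothetical placeholders and do not reflect the data set's actual naming scheme.

    import numpy as np
    from scipy.io import wavfile

    def schroeder_decay_db(rir):
        """Backward-integrated (Schroeder) energy decay curve in dB."""
        energy = np.cumsum(rir[::-1] ** 2)[::-1]
        return 10.0 * np.log10(energy / energy[0])

    def reverberation_time_t30(rir, fs):
        """Estimate T30: fit a line to the -5 dB .. -35 dB portion of the
        decay curve and extrapolate to 60 dB of decay."""
        edc = schroeder_decay_db(rir)
        t = np.arange(len(rir)) / fs
        mask = (edc <= -5.0) & (edc >= -35.0)
        slope, intercept = np.polyfit(t[mask], edc[mask], 1)
        return -60.0 / slope

    # Hypothetical file names for one source-receiver pair in the pub scene.
    fs_meas, rir_meas = wavfile.read("pub_rir_measured.wav")
    fs_sim, rir_sim = wavfile.read("pub_rir_simulated.wav")

    print("T30 measured:  %.2f s" % reverberation_time_t30(rir_meas.astype(float), fs_meas))
    print("T30 simulated: %.2f s" % reverberation_time_t30(rir_sim.astype(float), fs_sim))

The -5 dB to -35 dB evaluation range follows the usual T30 convention (as in ISO 3382); the same decay-curve machinery can be restricted to octave bands for a frequency-resolved comparison.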

Funder

Deutsche Forschungsgemeinschaft

Publisher

EDP Sciences

Subject

Electrical and Electronic Engineering, Speech and Hearing, Computer Science Applications, Acoustics and Ultrasonics

Cited by 7 articles.
