The Impact of Singing on Visual and Multisensory Speech Perception in Children on the Autism Spectrum

Authors:

Feldman, Jacob I. (1,2); Tu, Alexander (3,4); Conrad, Julie G. (3,5); Kuang, Wayne (3,6); Santapuram, Pooja (3,7); Woynaroski, Tiffany G. (1,2,8,9)

Affiliation:

1. Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN 37232, USA

2. Frist Center for Autism and Innovation, Vanderbilt University, Nashville, TN 37212, USA

3. Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN 37212, USA

4. Present address: Department of Otolaryngology and Communication Sciences, Medical College of Wisconsin, Milwaukee, WI 53226, USA

5. Present address: Department of Pediatrics, University of Illinois, Chicago, IL 60612, USA

6. Present address: Department of Pediatrics, Los Angeles County and University of Southern California (LAC + USC) Medical Center, University of Southern California, Los Angeles, CA 90033, USA

7. Present address: Department of Anesthesiology, Columbia University Irving Medical Center, New York, NY 10032, USA

8. Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN 37203, USA

9. Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN 37240, USA

Abstract

Autistic children show reduced multisensory integration of audiovisual speech stimuli in response to the McGurk illusion. Previously, it has been shown that adults can integrate sung McGurk tokens. These sung speech tokens offer more salient visual and auditory cues than spoken tokens, which may increase the identification and integration of visual speech cues in autistic children. Forty participants (20 autistic, 20 non-autistic peers) aged 7–14 years completed the study. Participants were presented with speech tokens in four modalities: auditory-only, visual-only, congruent audiovisual, and incongruent audiovisual (i.e., McGurk; auditory ‘ba’ and visual ‘ga’). Tokens were also presented in two formats: spoken and sung. Participants indicated what they perceived via a four-button response box (i.e., ‘ba’, ‘ga’, ‘da’, or ‘tha’). Accuracies and perception of the McGurk illusion were calculated for each modality and format. Analysis of visual-only identification indicated a significant main effect of format, whereby participants were more accurate on sung than on spoken trials, but no significant main effect of group and no significant interaction. Analysis of the McGurk trials indicated no significant main effect of format or group and no significant interaction. Sung speech tokens thus improved identification of visual speech cues but did not boost the integration of visual cues with heard speech in either group. Additional work is needed to determine what properties of sung speech contributed to the observed improvement in visual accuracy and to evaluate whether more prolonged exposure to sung speech may yield effects on multisensory integration.
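The abstract describes computing accuracies and McGurk-illusion rates for each modality and format. As a rough illustration of those dependent measures only, the minimal Python sketch below tallies visual-only identification accuracy and McGurk fusion rates by group and format. The trial records, field names, and the scoring rule (counting ‘da’/‘tha’ responses as fused percepts) are illustrative assumptions, not the authors’ analysis code.

```python
from collections import defaultdict

# Hypothetical trial records: (group, format, modality, visual_token, response).
# Values are illustrative only; they are not the study's data.
trials = [
    ("autistic",     "sung",   "visual-only", "ga", "ga"),
    ("autistic",     "spoken", "visual-only", "ga", "ba"),
    ("non-autistic", "sung",   "mcgurk",      "ga", "da"),
    ("non-autistic", "spoken", "mcgurk",      "ga", "ba"),
]

# Visual-only accuracy: proportion of visual-only trials on which the response
# matches the visually articulated token, tallied by (group, format).
acc = defaultdict(lambda: [0, 0])  # key -> [n_correct, n_trials]
for group, fmt, modality, visual_token, response in trials:
    if modality == "visual-only":
        acc[(group, fmt)][0] += int(response == visual_token)
        acc[(group, fmt)][1] += 1

# McGurk fusion rate: proportion of incongruent (auditory 'ba' + visual 'ga')
# trials on which a fused percept ('da' or 'tha') is reported (an assumed rule).
fusion = defaultdict(lambda: [0, 0])
for group, fmt, modality, visual_token, response in trials:
    if modality == "mcgurk":
        fusion[(group, fmt)][0] += int(response in ("da", "tha"))
        fusion[(group, fmt)][1] += 1

for key, (n_correct, n_total) in sorted(acc.items()):
    print(f"{key}: visual-only accuracy = {n_correct / n_total:.2f}")
for key, (n_fused, n_total) in sorted(fusion.items()):
    print(f"{key}: McGurk fusion rate = {n_fused / n_total:.2f}")
```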

Funder

National Center for Advancing Translational Sciences

National Institute on Deafness and Other Communication Disorders

National Institutes of Health

Vanderbilt Institute for Clinical and Translational Research

Vanderbilt Undergraduate Summer Research Program

Publisher

Brill

Subject

Cognitive Neuroscience, Computer Vision and Pattern Recognition, Sensory Systems, Ophthalmology, Experimental and Cognitive Psychology

Cited by 1 article.
