Neural bases of proactive and predictive processing of meaningful sub-word units in speech comprehension

Authors:

Suhail Matar, Alec Marantz

Abstract

To comprehend speech, human brains identify meaningful units in the speech stream. But whereas the English ‘She believed him.’ has three words, the Arabic equivalent ‘ṣaddaqathu.’ is a single word with three meaningful sub-word units, called morphemes: a verb stem (‘ṣaddaqa’), a subject suffix (‘-t-’), and a direct object pronoun (‘-hu’). It remains unclear whether and how the brain processes morphemes, above and beyond other language units, during speech comprehension.

Here, we propose and test hierarchically nested encoding models of speech comprehension: a NAÏVE model with word-, syllable-, and sound-level information; a BOTTOM-UP model with additional morpheme boundary information; and PREDICTIVE models that process morphemes before these boundaries are reached. We recorded magnetoencephalography (MEG) data as participants listened to Arabic sentences like ‘ṣaddaqathu.’. A temporal response function (TRF) analysis revealed that, in temporal and left inferior frontal regions, PREDICTIVE models outperform the BOTTOM-UP model, which in turn outperforms the NAÏVE model.

Moreover, verb stems were either length-AMBIGUOUS (e.g., ‘ṣaddaqa’ could initially be mistaken for the shorter stem ‘ṣadda’ = ‘blocked’) or length-UNAMBIGUOUS (e.g., ‘qayyama’ = ‘evaluated’ cannot be mistaken for a shorter stem), but the two conditions shared a uniqueness point, at which stem identity is fully disambiguated. Evoked analyses revealed differences between conditions before the uniqueness point, suggesting that, rather than awaiting disambiguation, the brain employs PROACTIVE PREDICTIVE strategies, processing the accumulated input as soon as any possible stem is identifiable, even if it is not yet unique. These findings highlight the role of morpheme processing in speech comprehension, and the importance of including morpheme-level information in neural and computational models of speech comprehension.

Significance statement

Many leading models of speech comprehension include information about words, syllables, and sounds.
But languages vary considerably in the amount of meaning packed into word units. This work proposes speech comprehension models with information about meaningful sub-word units, called morphemes (e.g., ‘bake-’ and ‘-ing’ in ‘baking’), and shows that they explain significantly more neural activity than models without morpheme information. We also show how the brain predictively processes morphemic information. These findings highlight the role of morphemes in speech comprehension and emphasize the contributions of morpheme-level information-theoretic metrics, like surprisal and entropy. Our models can be used to update current neural, cognitive, and computational models of speech comprehension, and constitute a step towards refining those models for naturalistic, connected speech.
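The morpheme-level information-theoretic metrics mentioned above, surprisal and entropy, have standard definitions that can be illustrated with a minimal sketch. The cohort of candidate stems and its probabilities below are hypothetical, chosen only to mirror the ‘ṣadda’/‘ṣaddaqa’ ambiguity example; they are not values from the study:

```python
import math

def surprisal(p):
    """Surprisal of an event with probability p, in bits: -log2(p)."""
    return -math.log2(p)

def entropy(dist):
    """Shannon entropy of a probability distribution, in bits."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)

# Hypothetical cohort of stems still consistent with the partial input,
# with made-up probabilities that sum to 1.
cohort = {"ṣadda": 0.6, "ṣaddaqa": 0.4}

# Surprisal of the longer stem actually turning out to be the input:
s = surprisal(cohort["ṣaddaqa"])  # -log2(0.4) ≈ 1.32 bits

# Entropy over the cohort before the uniqueness point, i.e., remaining
# uncertainty about which stem is being heard:
h = entropy(cohort)  # ≈ 0.97 bits
```

At the uniqueness point the cohort collapses to a single stem, so entropy drops to zero; predictive models can exploit these quantities before that point.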

Publisher

Cold Spring Harbor Laboratory
