Abstract
In this paper, we introduce a word repetition generative model (WORM) which, when combined with an appropriate belief-updating scheme, is capable of inferring the word that should be spoken when presented with an auditory cue. Our generative model takes a deep temporal form, combining both discrete and continuous states. This allows a (synthetic) WORM agent to perform categorical inference on continuous acoustic signals and, using the same model, to repeat heard words at the appropriate time. From the perspective of word production, the model simulates how high-level beliefs about discrete lexical, prosodic and contextual attributes give rise to continuous acoustic signals at the sensory level. From the perspective of word recognition, it simulates how continuous acoustic signals are recognised as words and how (and when) they should be repeated. We establish the face validity of our generative model by simulating a word repetition paradigm in which a synthetic agent or a human subject hears a target word and subsequently reproduces it; the repeated word matches the target lexically but differs from it acoustically. The results of these simulations show that the generative model correctly infers what must be repeated, to the extent that it can successfully interact with a human subject. This provides a formal process theory of auditory perception and production that can be deployed in health and disease. We conclude with a discussion of how the generative model could be scaled up to include a larger phonetic and phonotactic repertoire, more complex higher-level attributes (e.g., semantics, concepts) and more elaborate exchanges.
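The abstract's central idea, that a discrete lexical state generates a continuous acoustic signal, and that recognition inverts this mapping via Bayesian belief updating, can be illustrated with a minimal toy sketch. This is not the paper's actual model: the lexicon, trajectories and noise level below are invented for illustration, and the deep temporal (hierarchical, time-extended) structure is collapsed to a single generative step.

```python
import math
import random

# Hypothetical toy lexicon: each discrete word state maps to a mean
# continuous acoustic trajectory (values are illustrative only).
LEXICON = {
    "square":   [0.2, 0.8, 0.5, 0.1],
    "triangle": [0.9, 0.1, 0.7, 0.6],
}
NOISE_SD = 0.1  # assumed sensory noise level

def generate(word, rng=random.Random(0)):
    """Production: a discrete word belief generates a noisy continuous signal."""
    return [m + rng.gauss(0.0, NOISE_SD) for m in LEXICON[word]]

def log_likelihood(signal, word):
    """Gaussian log-likelihood of a continuous signal under one word's trajectory."""
    return sum(
        -0.5 * ((s - m) / NOISE_SD) ** 2
        - math.log(NOISE_SD * math.sqrt(2.0 * math.pi))
        for s, m in zip(signal, LEXICON[word])
    )

def recognise(signal):
    """Recognition: posterior over discrete words given the signal
    (flat prior), i.e. a softmax over the per-word log-likelihoods."""
    logs = {w: log_likelihood(signal, w) for w in LEXICON}
    mx = max(logs.values())                       # subtract max for stability
    unnorm = {w: math.exp(l - mx) for w, l in logs.items()}
    z = sum(unnorm.values())
    return {w: p / z for w, p in unnorm.items()}

signal = generate("square")       # agent hears a (noisy) target word
posterior = recognise(signal)     # categorical inference on the continuous signal
best = max(posterior, key=posterior.get)
```

Repetition then amounts to passing the inferred word back through `generate`: the reproduced signal shares the target's discrete identity while differing acoustically, mirroring the paradigm described above.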
Publisher
Cold Spring Harbor Laboratory
Cited by 3 articles.