Author:
Benjamin Parrell, Vikram Ramanarayanan, Srikantan Nagarajan, John Houde
Abstract
We present a new computational model of speech motor control: the Feedback-Aware Control of Tasks in Speech (FACTS) model. This model is based on a state feedback control architecture, which is widely accepted in non-speech motor domains. The FACTS model employs a hierarchical observer-based architecture, with a distinct higher-level controller of speech tasks and a lower-level controller of speech articulators. The task controller is modeled as a dynamical system governing the creation of desired constrictions in the vocal tract, based on the Task Dynamics model. Critically, both the task and articulatory controllers rely on an internal estimate of the current state of the vocal tract to generate motor commands. This internal state estimate is derived from initial predictions based on efference copy of applied controls. The resulting state estimate is then used to generate predictions of expected auditory and somatosensory feedback, and a comparison between predicted feedback and actual feedback is used to update the internal state prediction. We show that the FACTS model is able to qualitatively replicate many characteristics of the human speech system: the model is robust to noise in both the sensory and motor pathways, is relatively unaffected by a loss of auditory feedback but is more significantly impacted by the loss of somatosensory feedback, and responds appropriately to externally-imposed alterations of auditory and somatosensory feedback. The model also replicates previously hypothesized trade-offs between reliance on auditory and somatosensory feedback in speech motor control and shows for the first time how this relationship may be mediated by acuity in each sensory domain. These results have important implications for our understanding of the speech motor control system in humans.
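The abstract describes an observer-based state feedback loop: motor commands are generated from an internal state estimate, the estimate is first predicted forward using an efference copy of the command, and it is then corrected by the discrepancy between predicted and actual sensory feedback. The following minimal Python sketch illustrates that generic loop only; the plant matrices A, B, C and the gains K and L are illustrative placeholders, not parameters or equations from the FACTS model itself.

```python
# Sketch of an observer-based state feedback loop (not the authors' code).
# The controller never sees the true state; it acts on an internal estimate
# that is predicted from efference copy and corrected by sensory feedback.
import numpy as np

A = np.array([[1.0, 0.1], [0.0, 0.9]])   # assumed linear plant dynamics
B = np.array([[0.0], [0.1]])             # assumed control-input matrix
C = np.array([[1.0, 0.0]])               # assumed sensory mapping (state -> feedback)
K = np.array([[2.0, 1.0]])               # assumed feedback control gain
L = np.array([[0.3], [0.1]])             # assumed observer (correction) gain

x_true = np.array([[1.0], [0.0]])        # actual (hidden) plant state
x_hat = np.zeros((2, 1))                 # internal state estimate

rng = np.random.default_rng(0)
for t in range(50):
    # Controller uses the internal estimate, not the true state.
    u = -K @ x_hat

    # Plant evolves under the command, with motor noise.
    x_true = A @ x_true + B @ u + 0.01 * rng.standard_normal((2, 1))

    # Sensory feedback arrives, with sensory noise.
    y = C @ x_true + 0.01 * rng.standard_normal((1, 1))

    # Prediction step: efference copy of u drives the forward model.
    x_pred = A @ x_hat + B @ u

    # Correction step: predicted vs. actual feedback updates the estimate.
    y_pred = C @ x_pred
    x_hat = x_pred + L @ (y - y_pred)
```

In this style of architecture, removing the correction term (setting L to zero) leaves control running on predictions alone, which is the kind of manipulation the abstract refers to when discussing loss of auditory or somatosensory feedback.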
Publisher
Cold Spring Harbor Laboratory
Cited by
4 articles.