Affiliation:
1. Radboud University, Donders Institute for Brain, Cognition and Behavior, Department of Biophysics, Nijmegen, the Netherlands
Abstract
In dynamic visual or auditory gaze double-steps, a brief target flash or sound burst is presented in midflight of an ongoing eye-head gaze shift. Behavioral experiments in humans and monkeys have indicated that the subsequent eye and head movements to the target are goal-directed, regardless of stimulus timing, first gaze-shift characteristics, and initial conditions. This remarkable behavior requires that the gaze-control system 1) has continuous access to accurate signals about eye-in-head position and the ongoing eye-head movements, 2) accounts for the different internal signal delays, and 3) is able to update the retinal (T_E) and head-centric (T_H) target coordinates into appropriate eye-centered and head-centered motor commands on millisecond time scales. As predictive, feedforward remapping of targets cannot account for this behavior, we propose that targets are transformed into, and stored in, a stable reference frame as soon as their sensory information becomes available. We present a computational model in which recruited cells in the midbrain superior colliculus drive the eyes and head to the stored target location through a common dynamic oculocentric gaze-velocity command, which is continuously updated from the stable goal and transformed into appropriate oculocentric and craniocentric motor commands. We describe two equivalent, yet conceptually different, implementations that both account for the complex, but accurate, kinematic behaviors and trajectories of eye-head gaze shifts under a variety of challenging multisensory conditions, such as dynamic visual-auditory multisteps.
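The core idea of continuous spatial updating can be illustrated with a minimal Python sketch. It assumes a simple vector-subtraction update of the eye-centered target coordinates and a hypothetical proportional gain for the common gaze-velocity command, together with a fixed eye-head split; the names and parameters below are illustrative only and do not reproduce the paper's full model (collicular recruitment, delay compensation, dynamic eye-head coupling).

```python
import numpy as np

def update_target_eye_coords(T_E, gaze_displacement):
    """Continuous spatial updating: shrink the remaining eye-centered
    motor error by the gaze displacement executed in the last interval
    (simple vector subtraction; an assumption, not the full scheme)."""
    return T_E - gaze_displacement

def gaze_velocity_command(T_E, gain=10.0):
    """Hypothetical proportional controller: dynamic gaze-velocity
    command driven by the remaining eye-centered motor error."""
    return gain * T_E

# Example: target flashed 20 deg right, 10 deg up (eye-centered),
# updated every millisecond during the ongoing eye-head gaze shift.
T_E = np.array([20.0, 10.0])        # retinal target coordinates (deg)
eye_in_head = np.array([0.0, 0.0])  # oculocentric contribution (deg)
head = np.array([0.0, 0.0])         # craniocentric contribution (deg)
dt = 0.001                          # 1 ms update interval

for step in range(200):
    g_dot = gaze_velocity_command(T_E)   # common gaze-velocity command
    # Fixed 70/30 eye/head split, purely illustrative; the model derives
    # the oculocentric and craniocentric commands dynamically.
    eye_in_head += 0.7 * g_dot * dt
    head        += 0.3 * g_dot * dt
    # Gaze-in-space displacement = eye-in-head + head displacement.
    T_E = update_target_eye_coords(T_E, g_dot * dt)

print("Residual eye-centered motor error (deg):", T_E)
```

Running the sketch shows the eye-centered motor error decaying toward zero as the simulated gaze shift proceeds, regardless of how the displacement is split between eye and head, which is the essence of driving both plants from one continuously updated oculocentric goal.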
Funder
EU FP7 Marie-Curie ITN
EU Horizon 2020 ERC Adv Grant
Publisher
American Physiological Society
Subject
Physiology, General Neuroscience
Cited by
3 articles.