Abstract
Motor adaptation to novel dynamics occurs rapidly, using sensed errors to update the current motor memory. This adaptation is strongly driven by proprioceptive and visual signals that indicate errors in the motor memory. Here we extend this previous work by investigating whether the presence of additional visual cues can increase the rate of motor adaptation, specifically when the visual motion cue is congruent with the dynamics. Six groups of participants performed reaching movements while grasping the handle of a robotic manipulandum, with an additional cue (a red object) connected to the cursor. After a baseline, either a unidirectional (three groups) or bidirectional (three groups) velocity-dependent force field was applied during the reach. For each group, the movement of the red object relative to the cursor was either congruent with the force field dynamics, incongruent with the force field dynamics, or constant (fixed distance). Participants adapted more to the unidirectional force fields than to the bidirectional force fields. However, across both force fields, groups in which the visual cue matched the type of force field (congruent visual cue) exhibited a higher final adaptation level at the end of learning compared to either the control or incongruent conditions. Overall, we observed that an additional congruent cue assisted the formation of the motor memory of the external dynamics. We then demonstrate that a state estimation based model that integrates proprioceptive and visual information can successfully replicate the experimental data.

New & Noteworthy

We demonstrate that adaptation to novel dynamics is stronger when additional online visual cues that are congruent with the dynamics are presented during adaptation, compared to either a constant or incongruent visual cue. This effect was found regardless of whether a bidirectional or unidirectional velocity-dependent force field was presented to the participants. We propose that this effect may arise through the inclusion of this additional visual cue information within the state estimation process.
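To illustrate the kind of state estimation process referred to above, the following is a minimal sketch (not the authors' model): a Kalman filter that fuses noisy proprioceptive and visual observations of the lateral hand state. All parameters here (time step, noise covariances, the visual noise level sigma_vis) are hypothetical; in this toy setup, a more informative (e.g., congruent) visual cue can be mimicked by lowering sigma_vis, which shifts the optimal weighting toward the visual channel.

```python
# Hedged sketch: Kalman filter fusing proprioceptive and visual observations
# of lateral hand state [position, velocity]. Parameter values are assumptions
# for illustration only, not values from the study.
import numpy as np

dt = 0.01                              # time step (s), assumed
A = np.array([[1.0, dt],               # state transition for [position, velocity]
              [0.0, 1.0]])
Q = 1e-5 * np.eye(2)                   # process noise covariance, assumed
H = np.array([[1.0, 0.0],              # proprioception observes lateral position
              [1.0, 0.0]])             # vision also observes lateral position
sigma_prop, sigma_vis = 0.01, 0.005    # observation noise s.d. (m), assumed
R = np.diag([sigma_prop**2, sigma_vis**2])

def kalman_step(x_hat, P, z):
    """One predict/update cycle given a stacked [proprioceptive, visual] observation z."""
    # Predict the next state and its uncertainty
    x_pred = A @ x_hat
    P_pred = A @ P @ A.T + Q
    # Update: each sensory channel is weighted by its reliability via the Kalman gain
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x_new = x_pred + K @ (z - H @ x_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return x_new, P_new

# Example: estimate a constant 2 mm lateral displacement seen through both channels
rng = np.random.default_rng(0)
true_pos = 0.002
x_hat, P = np.zeros(2), 1e-3 * np.eye(2)
for _ in range(100):
    z = true_pos + rng.normal(0.0, [sigma_prop, sigma_vis])
    x_hat, P = kalman_step(x_hat, P, z)
print("estimated lateral position (m):", x_hat[0])
```

Under these assumptions, the filter's estimate converges faster and with less variance when the visual channel is reliable, which is one way a congruent online cue could sharpen the error signal driving adaptation.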
Publisher
Cold Spring Harbor Laboratory