Abstract
Sensory information from the movement of body parts can alter their perceived position when those parts are exposed to external physical stimuli. Visual information monitors the position and movement of body parts from an external perspective, whereas somatosensory information monitors them from an internal viewpoint. However, how these sensory signals are integrated remains unclear. In this study, a virtual reality (VR) system was used to evaluate how the temporal difference between visual and somatosensory information from hand movements affects the directional perception of a torque, while the visual appearance (human hand vs. non-human object) and visuohaptic congruency (congruent vs. incongruent) of self-avatars were modified. Visual information was provided by the movement of the self-avatars in the VR environment, while somatosensory information was provided by vibrations with asymmetric amplitudes that gave participants the sensation of being continuously pushed or pulled without any body part actually moving. Delaying the avatar's movement by 50 ms lowered the sensitivity of force-direction perception with human hands relative to non-human avatars, whereas a 200 ms delay raised it. These findings can contribute to applications requiring multisensory integration in VR environments.
Funder
Japan Society for the Promotion of Science
Support Center for Advanced Telecommunications Technology Research Foundation
Publisher
Springer Science and Business Media LLC
Cited by
1 article.