Authors:
Borodaeva Zhanna, Winkler Sven, Brade Jennifer, Klimant Philipp, Jahn Georg
Abstract
Keeping track of locations across self-motion is possible by continuously updating spatial representations or by encoding and later instantaneously retrieving spatial representations. In virtual reality (VR), the sensory cues to self-motion used in continuous updating are typically reduced. In passive translation, compared to real walking in VR, optic flow is available but body-based (idiothetic) cues are missing. With both kinds of translation, boundaries and landmarks serving as static visual cues can be used for instantaneous updating. In two experiments, we let participants encode two target locations, one of which had to be reproduced by pointing after forward translation in immersive VR (HMD). We increased sensory cues to self-motion relative to passive translation either by strengthening optic flow or by real walking. Furthermore, we varied static visual cues in the form of boundaries and landmarks inside boundaries. Increased optic flow and real walking did not reliably increase performance, suggesting that optic flow even in a sparse environment was sufficient for continuous updating or that merely instantaneous updating took place. Boundaries and landmarks, however, did support performance, as quantified by decreased bias and increased precision, particularly if they were close to or even enclosed target locations. Thus, enriched spatial context is a viable method to support spatial updating in VR and synthetic environments (teleoperation). Spatial context not only provides a static visual reference in offline updating and continuous allocentric self-location updating but also, according to recent neuroscientific evidence on egocentric bearing cells, contributes to continuous egocentric location updating.
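As a minimal sketch (not the authors' analysis code) of how pointing performance can be summarized as bias and precision, assuming 2D pointed locations per trial, one common convention is to take bias as the distance of the mean response from the true target and precision as the dispersion of responses around their own centroid; all names below are hypothetical.

```python
import numpy as np

def bias_and_precision(responses, target):
    """responses: (n_trials, 2) pointed locations; target: (2,) true location.
    Bias: distance of the mean response (centroid) from the true target.
    Precision: mean distance of responses from their centroid
    (smaller values indicate higher precision)."""
    responses = np.asarray(responses, dtype=float)
    target = np.asarray(target, dtype=float)
    centroid = responses.mean(axis=0)
    bias = np.linalg.norm(centroid - target)
    dispersion = np.linalg.norm(responses - centroid, axis=1).mean()
    return bias, dispersion

# Example: responses clustered slightly ahead-left of a target at the origin.
resp = np.array([[0.3, 0.1], [0.4, 0.0], [0.35, 0.15]])
print(bias_and_precision(resp, target=[0.0, 0.0]))
```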
Funder
Deutsche Forschungsgemeinschaft
Cited by
3 articles.