Abstract
Vision provides the most important sensory information for spatial navigation. Recent technical advances open up new options for conducting more naturalistic experiments in virtual reality (VR) while additionally gathering data on viewing behavior with eye tracking. Here, we propose a method to quantify characteristics of visual behavior by using graph-theoretical measures to abstract eye tracking data recorded in a 3D virtual urban environment.

The analysis is based on eye tracking data of 20 participants who freely explored the virtual city Seahaven for 90 minutes with an immersive VR headset with a built-in eye tracker. To extract what participants looked at, we defined "gaze" events, from which we created gaze graphs. On these, we applied graph-theoretical measures to reveal the underlying structure of visual attention.

Applying graph partitioning, we found that our virtual environment could be treated as one coherent city. To investigate the importance of houses in the city, we applied the node degree centrality measure. Our results revealed that 10 houses had a node degree that consistently exceeded the mean node degree of all other houses by more than two sigma. The importance of these houses was supported by the hierarchy index, which showed a clear hierarchical structure of the gaze graphs. As these high node degree houses fulfilled several characteristics of landmarks, we named them "gaze-graph-defined landmarks". Applying the rich club coefficient, we found that these gaze-graph-defined landmarks were preferentially connected to each other and that participants spent the majority of their experiment time in areas where at least two of those houses were visible.

Our findings not only provide new experimental evidence for the development of spatial knowledge, but also establish a new methodology to identify and assess the function of landmarks in spatial navigation based on eye tracking data.

Author Summary

The ability to navigate and orient ourselves in an unknown environment is important in everyday life. To better understand how we learn about a new environment, it is important to study our behavior during the process of spatial navigation. New technical advances allow us to conduct studies in naturalistic virtual environments with participants wearing immersive VR headsets. In addition, we can use eye trackers to record participants' eye movements. This is valuable because eye movements reflect visual attention and, therefore, important cognitive processes. However, it can be difficult to analyze eye tracking data measured in a VR environment, as there is no established algorithm yet. Therefore, we propose a new method to analyze such eye tracking data. In addition, our method allows us to transform the eye tracking data into graphs, which we can use to find patterns in behavior that were not accessible before. Using this methodology, we found that participants who spent 90 minutes exploring a new virtual town viewed some houses preferentially; we call these "gaze-graph-defined landmarks". Our further analysis also reveals new characteristics of these houses that had not previously been associated with landmarks.
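The core of the described pipeline is to turn sequences of gaze events into a graph and then apply standard graph measures. The following is a minimal sketch, assuming a gaze graph with one node per house and an edge whenever two houses were viewed in direct succession; the input format, function name, and edge definition are illustrative assumptions, not the authors' implementation.

import networkx as nx
from statistics import mean, pstdev

def gaze_graph_landmarks(gaze_sequence):
    # gaze_sequence: ordered list of house IDs, one entry per gaze event (hypothetical input format)
    G = nx.Graph()
    G.add_nodes_from(set(gaze_sequence))
    # Connect two houses whenever they were looked at in direct succession
    G.add_edges_from((a, b) for a, b in zip(gaze_sequence, gaze_sequence[1:]) if a != b)

    # Node degree centrality: flag houses whose degree exceeds the mean by more than two sigma
    degrees = dict(G.degree())
    threshold = mean(degrees.values()) + 2 * pstdev(degrees.values())
    landmarks = [house for house, deg in degrees.items() if deg > threshold]

    # Rich club coefficient: are high-degree houses preferentially connected to each other?
    rich_club = nx.rich_club_coefficient(G, normalized=False)
    return G, landmarks, rich_club

In the study itself, the two-sigma criterion identified 10 such houses per gaze graph; the hierarchy index and the analysis of how much time participants spent with at least two of these houses in view would additionally require visibility data that is not part of this sketch.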
Publisher
Cold Spring Harbor Laboratory
Cited by
1 article.