Abstract
When humans navigate through complex environments, they coordinate gaze and steering to efficiently sample the visual information needed to guide movement. Gaze and steering behavior during high-speed self-motion has been extensively studied in the context of automobile driving along a winding road. Theoretical accounts that have emerged from this work capture behavior during movement along explicit, well-defined paths over flat, obstacle-free ground surfaces. However, humans are also capable of visually guiding self-motion over uneven terrain that is cluttered with obstacles and may lack an explicit path. An extreme example of such behavior occurs during first-person view drone racing, in which pilots maneuver at high speeds through a dense forest. In this study, we explored the gaze and steering behavior of skilled drone pilots. Subjects guided a simulated quadcopter along a racecourse embedded within a forest-like virtual environment built in Unity. The environment was viewed through a head-mounted display while gaze behavior was recorded using an eye tracker. In two experiments, subjects performed the task in multiple conditions that varied in terms of the presence of obstacles (trees), waypoints (hoops to fly through), and a path to follow. We found that subjects often looked in the general direction of things that they wanted to steer toward, but gaze fell on nearby objects and surfaces more often than on the actual path or hoops. Nevertheless, subjects were able to perform the task successfully, steering at high speeds while remaining on the path, passing through hoops, and avoiding collisions. Furthermore, in conditions that contained hoops, subjects adapted how they approached the most immediate hoop in anticipation of the position (but not the orientation) of the subsequent hoop. Taken together, these findings challenge existing models of steering that assume that steering is tightly coupled to where actors look.