Affiliation:
1. University of California at Santa Barbara
2. Osaka University
Abstract
Drone navigation in complex environments poses significant challenges for teleoperators. Especially in three-dimensional (3D) structures such as buildings or tunnels, viewpoints are often limited to the drone's current camera view, nearby objects become collision hazards, and frequent occlusion can hinder accurate operation.
To address these issues, we have developed a novel teleoperation interface that provides the user with environment-adaptive viewpoints, automatically configured to improve safety and support smooth operation. This real-time adaptive viewpoint system takes the robot's position, orientation, and 3D point-cloud information into account and adjusts the user's viewpoint to maximize visibility. Our prototype uses simultaneous localization and mapping (SLAM)-based reconstruction with an omnidirectional camera, and we use the resulting models, as well as simulations, in a series of preliminary experiments on navigating various structures. Results suggest that automatic viewpoint generation can outperform first- and third-person view interfaces for virtual teleoperators in terms of ease of control and accuracy of robot operation.
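The abstract does not specify how the adaptive viewpoint is computed, but the core idea — choosing a virtual camera pose whose line of sight to the robot is least occluded by the reconstructed point cloud — can be sketched as follows. All names, parameters, and the candidate-sampling strategy here are illustrative assumptions, not the authors' actual method:

```python
import math

def score_viewpoint(cand, robot, cloud, radius=0.3):
    """Count cloud points within `radius` of the segment cand -> robot.

    A lower score means a less occluded line of sight. `cand` and
    `robot` are (x, y, z) tuples; `cloud` is a list of such tuples.
    """
    seg = [r - c for r, c in zip(robot, cand)]
    seg_len_sq = sum(v * v for v in seg)
    occluders = 0
    for p in cloud:
        # Project p onto the segment, clamped to its endpoints.
        t = sum((pi - ci) * vi for pi, ci, vi in zip(p, cand, seg)) / seg_len_sq
        t = max(0.0, min(1.0, t))
        closest = [c + t * v for c, v in zip(cand, seg)]
        d = math.sqrt(sum((pi - qi) ** 2 for pi, qi in zip(p, closest)))
        if d < radius:
            occluders += 1
    return occluders

def best_viewpoint(robot, yaw, cloud, dist=2.0, height=1.0, n=8):
    """Sample candidate viewpoints on an arc behind the robot and
    return the one whose line of sight to the robot is least occluded."""
    best, best_score = None, None
    for k in range(n):
        # Spread candidates over the hemisphere opposite the heading.
        a = yaw + math.pi + (k / n - 0.5) * math.pi
        cand = (robot[0] + dist * math.cos(a),
                robot[1] + dist * math.sin(a),
                robot[2] + height)
        s = score_viewpoint(cand, robot, cloud)
        if best_score is None or s < best_score:
            best, best_score = cand, s
    return best
```

A real system would also smooth the viewpoint trajectory over time to avoid jarring camera jumps, and would score candidates by visibility of the surrounding workspace rather than only the robot itself; this sketch shows just the occlusion test at the center of the idea.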
Funder
Grant-in-Aid for Challenging Exploratory Research
Publisher
Association for Computing Machinery (ACM)
Subject
Artificial Intelligence, Human-Computer Interaction
Cited by
10 articles.