Affiliation:
1. Chair of Applied Mechanics, Technical University of Munich, Boltzmannstraße 15, 85748 Garching, Germany
Abstract
In order to achieve real autonomy, robots have to be able to navigate in completely unknown environments. Due to the complexity of computer vision algorithms, almost every approach to robotic navigation is either based on previous knowledge of the environment, such as markers or knowledge acquired through learning methods, or makes strong simplifying assumptions about it (height-map representations, static scenarios). While showing impressive success in certain applications, these approaches limit the potential of legged robots to match the remarkable flexibility of humans in more complex environments. In this work, we present a strategy for full 3D vision processing that is able to handle changing, dynamic environments. These are modeled using 3D geometries that are processed in real time by the motion planner of our biped robot Lola for avoiding moving obstacles and walking over platforms. To allow for a more intuitive development of such systems in the future, we present tools for visualization, including two mixed reality applications based on an external camera and Microsoft’s HoloLens. We validate our system in simulations and experiments with our full-size humanoid robot Lola and release our framework as open source for the benefit of the community.
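The abstract mentions that obstacles are modeled as 3D geometries queried in real time by the motion planner. As a purely illustrative sketch (not the authors' actual implementation, whose data structures and names are not given here), one common way to realize such a query is to approximate an obstacle as a swept-sphere volume (capsule) and compute the clearance of a robot point from it; the struct and function names below are assumptions for illustration only.

#include <algorithm>
#include <cmath>

struct Vec3 {
    double x, y, z;
};

static Vec3 sub(const Vec3& a, const Vec3& b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static double dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }
static double norm(const Vec3& a) { return std::sqrt(dot(a, a)); }

// A capsule: the line segment from p0 to p1, inflated by `radius`.
struct Capsule {
    Vec3 p0, p1;
    double radius;
};

// Signed clearance between a point on the robot (e.g., a foot vertex) and the
// capsule surface; negative values indicate penetration.
double clearance(const Vec3& point, const Capsule& c) {
    const Vec3 d = sub(c.p1, c.p0);
    const double len2 = dot(d, d);
    // Project the point onto the segment and clamp the parameter to [0, 1].
    double t = (len2 > 0.0) ? dot(sub(point, c.p0), d) / len2 : 0.0;
    t = std::clamp(t, 0.0, 1.0);
    const Vec3 closest = {c.p0.x + t * d.x, c.p0.y + t * d.y, c.p0.z + t * d.z};
    return norm(sub(point, closest)) - c.radius;
}

A planner could call such a clearance query for every candidate footstep against every tracked obstacle and discard steps whose clearance falls below a safety margin; whether the paper's system uses capsules or other geometric primitives is not stated in this abstract.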
Funder
Deutsche Forschungsgemeinschaft
Publisher
World Scientific Pub Co Pte Lt
Subject
Artificial Intelligence, Mechanical Engineering
Cited by
15 articles.