Abstract
Navigation tasks are often subject to constraints that relate to the sensors (visibility) or arise from the environment (obstacles). In this paper, we propose a framework for autonomous omnidirectional wheeled robots that takes into account both collision and occlusion risk during sensor-based navigation. The task consists in driving the robot towards a visual target in the presence of static and moving obstacles. The target is acquired by fixed on-board cameras with a limited field of view, while the surrounding obstacles are detected by lidar scanners. To perform the task, the robot must not only keep the target in view while avoiding the obstacles, but also predict the target's location when it is occluded. The effectiveness of our approach is validated through several experiments.
Publisher
Cambridge University Press (CUP)
Subject
Computer Science Applications,General Mathematics,Software,Control and Systems Engineering
Cited by
7 articles.