Author:
Tasaki Tsuyoshi, Tokura Seiji, Sonoura Takafumi, Ozaki Fumio, Matsuhira Nobuto
Abstract
For a mobile robot, self-localization and knowledge of the locations of all obstacles around it are essential. Moreover, classifying those obstacles as stable or unstable, and localizing fast with a single sensor such as an omnidirectional camera, are also important for achieving smooth movement and reducing the cost of the robot. However, there are few studies on locating and classifying all obstacles around the robot and on fast self-localization during motion using only one omnidirectional camera. In order to locate obstacles and localize the robot, we have developed a new method that uses two kinds of points that can be detected and tracked quickly even in omnidirectional images. In the obstacle location and classification process, we use floor boundary points, whose distance from the robot can be measured with an omnidirectional camera. By tracking these points, we can classify obstacles by comparing the movement of each tracked point with odometry data. Our method adapts the threshold used to detect these points based on the result of this comparison, in order to improve classification. In the self-localization process, we use tracked scale- and rotation-invariant feature points as new landmarks, which are detected over a long period by combining a fast tracking method with the slower Speeded-Up Robust Features (SURF) method. Once landmarks are detected, they can be tracked quickly, so fast self-localization is achieved. The classification ratio of our method is 85.0%, four times higher than that of a previous method. Using our method, our robot localizes 2.9 times faster and 4.2 times more accurately than with the SURF method alone.
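The stable/unstable decision described in the abstract lends itself to a compact illustration. The following Python sketch is a hypothetical reading of that step, not the authors' code: the coordinate convention, every function name, and the threshold-update rule are assumptions made for this example. It compares each tracked floor-boundary point's motion with the motion that odometry predicts for a stationary point, and adapts the threshold from the residuals of points already judged stable.

import numpy as np

def predict_from_odometry(point_xy, dx, dy, dtheta):
    # Predict where a stationary point, expressed in the previous robot
    # frame (metres), should appear after the robot moves by the odometry
    # increment (dx, dy, dtheta): p_new = R(-dtheta) * (p_old - t).
    translated = np.asarray(point_xy, dtype=float) - np.array([dx, dy])
    c, s = np.cos(-dtheta), np.sin(-dtheta)
    return np.array([c * translated[0] - s * translated[1],
                     s * translated[0] + c * translated[1]])

def classify_point(prev_xy, curr_xy, odom, threshold):
    # Label a tracked floor-boundary point "stable" if its observed motion
    # is explained by odometry to within `threshold` metres.
    residual = np.linalg.norm(np.asarray(curr_xy, dtype=float)
                              - predict_from_odometry(prev_xy, *odom))
    return ("stable" if residual < threshold else "unstable"), residual

def update_threshold(threshold, stable_residuals, margin=0.05):
    # Assumed adaptation rule: ease the threshold toward the typical
    # residual of points already judged stable, plus a small margin.
    if stable_residuals:
        threshold = 0.9 * threshold + 0.1 * (np.median(stable_residuals) + margin)
    return threshold

# Example: the robot advanced 0.10 m without rotating, but the tracked point
# moved much more than odometry predicts, so it lies on a moving obstacle.
label, r = classify_point((1.0, 0.0), (0.7, 0.0), (0.10, 0.0, 0.0), threshold=0.1)
print(label, round(r, 3))  # -> unstable 0.2

In the paper's setting, the robot-frame position of each floor boundary point would come from the omnidirectional camera's floor-plane geometry; a planar (x, y) coordinate measured in metres is assumed here for brevity.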
Publisher
Fuji Technology Press Ltd.
Subject
Electrical and Electronic Engineering, General Computer Science
Cited by
2 articles.