Abstract
To solve the accurate positioning problem of mobile robots, simultaneous localization and mapping (SLAM) and visual odometry (VO) based on visual information are widely used. However, most visual SLAM and VO systems cannot meet accuracy requirements in dynamic indoor environments. This paper proposes a robust visual odometry based on deep learning that eliminates feature-point matching errors. When the camera and dynamic objects are in relative motion, the camera frames exhibit ghosting, especially in high-dynamic environments, which introduces additional positioning error. To address this problem, a novel method based on the average optical flow value of the dynamic region is proposed to identify the feature points belonging to the ghosting; the feature points of both the ghosting and the dynamic region are then removed. After the remaining feature points are matched, the pose is computed with a non-linear optimization method. The proposed algorithm is tested on the TUM RGB-D dataset, and the results show that our VO achieves higher positioning accuracy than other robust SLAM and VO systems and remains strongly robust, especially in high-dynamic environments.
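Since only the abstract is shown here, the ghosting-removal step can be sketched but not reproduced exactly. The Python/OpenCV sketch below illustrates one plausible reading: compute dense optical flow between consecutive frames, average its magnitude over the detected dynamic region, and discard feature points that either lie inside that region or show flow comparable to it. The function name `filter_keypoints`, the Farneback flow, the `dynamic_mask` input, and the `ratio` threshold are all illustrative assumptions, not the paper's actual implementation.

```python
import cv2
import numpy as np

def filter_keypoints(prev_gray, curr_gray, dynamic_mask, keypoints, ratio=0.5):
    """Drop keypoints inside the dynamic region or in suspected ghosting areas.

    dynamic_mask: bool array of shape (H, W), True where dynamic objects were
    detected (e.g. by a segmentation network, which this sketch assumes exists).
    """
    # Dense optical flow between consecutive frames (Farneback method).
    flow = cv2.calcOpticalFlowFarneback(
        prev_gray, curr_gray, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    mag = np.linalg.norm(flow, axis=2)  # per-pixel flow magnitude

    # Average flow magnitude over the dynamic region -- the cue the abstract
    # attributes to the ghosting-identification step.
    region_mean = float(mag[dynamic_mask].mean()) if dynamic_mask.any() else 0.0

    h, w = mag.shape
    kept = []
    for kp in keypoints:
        x = min(max(int(round(kp.pt[0])), 0), w - 1)
        y = min(max(int(round(kp.pt[1])), 0), h - 1)
        if dynamic_mask[y, x]:
            continue  # inside the dynamic region: always removed
        # Assumed rule: a background point whose flow magnitude is comparable
        # to the dynamic region's average is treated as ghosting and removed.
        if region_mean > 0 and mag[y, x] >= ratio * region_mean:
            continue
        kept.append(kp)
    return kept  # remaining points go on to matching and pose optimization
```

The surviving keypoints would then feed the matching and non-linear pose optimization stages the abstract describes; those stages are not sketched here.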
Funder
Outstanding Foreign Scientist Support Project in Henan Province
Outstanding Young Teacher Development Fund of Zhengzhou University
Science and Technology Innovation Research Team Support Plan of Henan Province
National Natural Science Foundation of China
Subject
Applied Mathematics, Instrumentation, Engineering (miscellaneous)
Cited by
8 articles.