Abstract
Lidar and cameras are the two most frequently used sensor types in autonomous driving and mobile robotics, and fusing their data for robot self-positioning and mapping has become a popular research direction in simultaneous localization and mapping (SLAM). Considering the characteristics of a planar mobile robot, this paper proposes an image-semantics-based method that solves the inter-frame motion of the laser point cloud to achieve fast, real-time positioning of a mobile robot. First, an image cascade network converts image samples to different resolutions, and network branches of different complexity are gradually fused into the final, finer semantic segmentation result. Then, the laser point cloud is rapidly segmented and processed to extract key points and surfels. A unified framework for semantics-assisted inter-frame motion estimation is established from the semantic image data and the key point-cloud features. Finally, the stability of the feature extraction, the accuracy of the motion estimation, and the computational efficiency (measured by running time) are verified experimentally. The results show that the standard deviation of the estimated motion is less than 0.0025 and that a single run of the whole system takes about 38 ms.
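To make the inter-frame motion estimation step concrete, the following is a minimal sketch, not the authors' implementation, of a semantics-weighted point-to-plane alignment between two lidar frames. The function names, the down-weighting factor for semantically inconsistent correspondences, and the Gauss-Newton formulation are all illustrative assumptions; the paper's unified framework may differ in its residuals and weighting.

```python
import numpy as np


def rotation_from_axis_angle(w):
    """Rodrigues formula: rotation matrix from an axis-angle vector."""
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        return np.eye(3)
    k = w / theta
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])
    return np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * (K @ K)


def estimate_motion(src, dst, normals, labels_src, labels_dst, iters=5):
    """Estimate the SE(3) motion aligning src to dst (hypothetical sketch).

    src, dst   : (N, 3) matched key points from consecutive scans
    normals    : (N, 3) unit normals at the dst points (e.g. surfel normals)
    labels_*   : (N,) semantic class ids from the segmented camera images
    """
    T = np.eye(4)
    # Semantic gating: correspondences whose labels disagree get low weight
    # (the 0.1 factor is an assumption, not a value from the paper).
    w = np.where(labels_src == labels_dst, 1.0, 0.1)
    for _ in range(iters):
        p = (T[:3, :3] @ src.T).T + T[:3, 3]          # transformed source
        r = np.einsum('ij,ij->i', dst - p, normals)   # point-to-plane residual
        # Jacobian of the residual w.r.t. a small twist [rotation; translation]
        J = np.hstack([np.cross(p, normals), normals])  # (N, 6)
        A = J.T @ (w[:, None] * J)
        b = J.T @ (w * r)
        dx = np.linalg.solve(A, b)                    # Gauss-Newton step
        dT = np.eye(4)
        dT[:3, :3] = rotation_from_axis_angle(dx[:3])
        dT[:3, 3] = dx[3:]
        T = dT @ T                                    # left-multiply update
    return T
```

Under these assumptions, semantic labels act only as per-correspondence weights, so the estimator degrades gracefully to plain point-to-plane alignment when the segmentation is uninformative.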
Subject
Applied Mathematics, Instrumentation, Engineering (miscellaneous)