Affiliation:
1. Center for Artificial Intelligence Technology, Universiti Kebangsaan Malaysia, Bangi, Malaysia
2. Center for Cyber Security, Universiti Kebangsaan Malaysia, Bangi, Malaysia
Abstract
Simultaneous localization and mapping (SLAM) is a fundamental problem in robotics and computer vision: a robot or autonomous system navigating an unknown environment must build a map of its surroundings while accurately estimating its own position within that map. Although significant progress has been made in SLAM over the years, challenges remain. One prominent issue is maintaining robustness and accuracy in dynamic environments, where moving objects introduce uncertainty and error into the estimation process. Traditional methods that rely on temporal information to distinguish static from dynamic objects are limited in accuracy and applicability. Recent research has therefore turned to deep learning-based methods that combine semantic segmentation and motion estimation to handle dynamic objects, aiming to improve accuracy and adaptability in complex scenes. This article proposes an approach to enhance the robustness and precision of monocular visual odometry in dynamic environments. The semantic segmentation network DeepLabV3+ is used to identify dynamic objects in each image, and a motion consistency check is then applied to remove feature points belonging to those objects. The remaining static feature points are used for feature matching and pose estimation within ORB-SLAM2, and the system is evaluated on the Technical University of Munich (TUM) dataset. Experimental results show that, by eliminating the influence of moving objects, our method outperforms traditional visual odometry methods in both accuracy and robustness, especially in dynamic environments. Compared with the original ORB-SLAM2, the proposed system significantly reduces the absolute trajectory error and the relative pose error in dynamic scenes, markedly improving the accuracy and robustness of the SLAM system's pose estimation.
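The article itself does not include source code; the sketch below is an illustration only, not the authors' implementation, of the general pipeline described in the abstract: reject ORB feature points that fall on a segmented dynamic class, then discard remaining outliers with an epipolar motion-consistency check. It uses torchvision's DeepLabV3 as a stand-in for DeepLabV3+ and OpenCV for feature handling; the class index, thresholds, and function names are assumptions.

```python
# Illustrative sketch only (assumed names and parameters, not the authors' code):
# 1) mask out feature points that fall on a segmented "dynamic" class,
# 2) apply an epipolar motion-consistency check to the surviving matches.
import cv2
import numpy as np
import torch
import torchvision

PERSON_CLASS = 15  # Pascal VOC "person" index used by torchvision segmentation models

# torchvision ships DeepLabV3 (not V3+); used here only as a stand-in segmenter.
model = torchvision.models.segmentation.deeplabv3_resnet50(weights="DEFAULT").eval()

def dynamic_mask(bgr):
    """Boolean mask that is True on pixels labelled as a potentially dynamic class."""
    rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB)
    t = torch.from_numpy(rgb).permute(2, 0, 1).float() / 255.0
    t = torchvision.transforms.functional.normalize(
        t, mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
    with torch.no_grad():
        labels = model(t.unsqueeze(0))["out"][0].argmax(0).numpy()
    return labels == PERSON_CLASS

def static_matches(img1, img2, epi_thresh=1.0):
    """Match ORB features between two frames, keeping only points that lie
    off the dynamic mask and are consistent with the two-view epipolar geometry."""
    orb = cv2.ORB_create(nfeatures=1500)
    kp1, des1 = orb.detectAndCompute(cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY), None)
    kp2, des2 = orb.detectAndCompute(cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY), None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)

    # Drop matches whose keypoints land on segmented dynamic objects in either frame.
    m1, m2 = dynamic_mask(img1), dynamic_mask(img2)
    kept = [m for m in matches
            if not m1[int(kp1[m.queryIdx].pt[1]), int(kp1[m.queryIdx].pt[0])]
            and not m2[int(kp2[m.trainIdx].pt[1]), int(kp2[m.trainIdx].pt[0])]]
    pts1 = np.float32([kp1[m.queryIdx].pt for m in kept])
    pts2 = np.float32([kp2[m.trainIdx].pt for m in kept])

    F, inliers = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, epi_thresh, 0.99)
    if F is None:
        return pts1[:0], pts2[:0]

    # Motion-consistency check: a static point must lie close to its epipolar line.
    lines = cv2.computeCorrespondEpilines(pts1.reshape(-1, 1, 2), 1, F).reshape(-1, 3)
    dist = np.abs(np.sum(lines * np.hstack([pts2, np.ones((len(pts2), 1))]), axis=1))
    dist /= np.linalg.norm(lines[:, :2], axis=1)
    keep = (inliers.ravel() == 1) & (dist < epi_thresh)
    return pts1[keep], pts2[keep]
```

In the system described in the abstract, correspondences filtered in this way would feed ORB-SLAM2's feature matching and pose estimation; restricting the mask to the person class and the fixed thresholds here are simplifications for illustration.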
Funder
Universiti Kebangsaan Malaysia