Accurate Visual Simultaneous Localization and Mapping (SLAM) against Around View Monitor (AVM) Distortion Error Using Weighted Generalized Iterative Closest Point (GICP)
Author:
Lee Yangwoo 1, Kim Minsoo 1, Ahn Joonwoo 2, Park Jaeheung 1,3,4
Affiliation:
1. Dynamic Robotic Systems (DYROS) Lab, Graduate School of Convergence Science and Technology, Seoul National University, Seoul 08826, Republic of Korea
2. Samsung Advanced Institute of Technology, Samsung Electronics, Suwon 16678, Republic of Korea
3. Automation and Systems Research Institute (ASRI), Research Institute for Convergence Science (RICS), Seoul National University, Seoul 08826, Republic of Korea
4. Advanced Institutes of Convergence Technology, Suwon 16229, Republic of Korea
Abstract
Accurately estimating the pose of a vehicle is important for autonomous parking. Around view monitor (AVM)-based visual Simultaneous Localization and Mapping (SLAM) has gained attention due to its affordability, commercial availability, and suitability for parking scenarios characterized by rapid rotations and back-and-forth movements of the vehicle. In real-world environments, however, the performance of AVM-based visual SLAM is degraded by AVM distortion errors resulting from inaccurate camera calibration. Therefore, this paper presents an AVM-based visual SLAM for autonomous parking that is robust against AVM distortion errors. A deep learning network is employed to assign weights to parking line features according to the extent of the AVM distortion error. To obtain training data while minimizing human effort, three-dimensional (3D) Light Detection and Ranging (LiDAR) data and official parking lot guidelines are utilized. The output of the trained network model is incorporated into weighted Generalized Iterative Closest Point (GICP) for vehicle localization under distortion error conditions. The experimental results demonstrate that the proposed method reduces localization errors by an average of 39% compared with previous AVM-based visual SLAM approaches.
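The core idea summarized above is that per-feature confidence weights let the scan-matching step discount parking line points corrupted by AVM distortion. The following minimal sketch illustrates that weighting principle with a single weighted rigid-alignment step (weighted Kabsch/SVD over known correspondences); it is not the authors' GICP implementation, and the function name and interface are illustrative assumptions.

```python
import numpy as np

def weighted_rigid_align(src, tgt, w):
    """One weighted alignment step: find R, t minimizing
    sum_i w_i * || R @ src_i + t - tgt_i ||^2 over corresponding
    3D points (weighted Kabsch via SVD).

    In the spirit of the paper, w_i would come from a learned model
    that down-weights features with large AVM distortion error, so
    they contribute less to the estimated vehicle pose.
    """
    w = np.asarray(w, dtype=float)
    w = w / w.sum()  # normalize weights

    # Weighted centroids of source and target point sets.
    mu_s = (w[:, None] * src).sum(axis=0)
    mu_t = (w[:, None] * tgt).sum(axis=0)

    # Weighted cross-covariance between the centered point sets.
    S = (w[:, None] * (src - mu_s)).T @ (tgt - mu_t)

    # Optimal rotation from the SVD, with a determinant fix
    # to exclude reflections.
    U, _, Vt = np.linalg.svd(S)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = mu_t - R @ mu_s
    return R, t
```

In a full GICP pipeline this step would sit inside an iterative loop that re-establishes correspondences and uses per-point covariances; the sketch only shows how per-feature weights enter the pose estimate.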
Subject
Electrical and Electronic Engineering; Biochemistry; Instrumentation; Atomic and Molecular Physics, and Optics; Analytical Chemistry