An Object-Centric Hierarchical Pose Estimation Method Using Semantic High-Definition Maps for General Autonomous Driving
Author:
Pyo Jeong-Won 1, Choi Jun-Hyeon 1, Kuc Tae-Yong 1
Affiliation:
1. Department of Electrical and Computer Engineering, College of Information and Communication Engineering, Sungkyunkwan University, Suwon 16419, Republic of Korea
Abstract
To achieve Level 4 and above autonomous driving, a robust and stable autonomous driving system that can adapt to varied environmental conditions is essential. This paper addresses vehicle pose estimation, a core component of such systems, with the goal of making it more universal and robust. The prevalent approach relies on Real-Time Kinematic (RTK) sensor data to obtain accurate positions. However, owing to the characteristics of RTK sensors, precise positioning is difficult or impossible in indoor spaces or areas with signal interference, which degrades pose estimation and hinders autonomous driving in such scenarios. This paper proposes a method that overcomes these limitations by leveraging objects registered in a high-precision map. The proposed approach builds a semantic high-definition (HD) map augmented with objects, forms object-centric features from them, recognizes the vehicle's location using these features, and then accurately estimates the vehicle's pose from the recognized location. The method improves the precision of pose estimation in environments where RTK data are unavailable or unreliable, enabling more robust and stable autonomous driving. Its effectiveness is demonstrated through both simulation and real-world experiments, which show more precise pose estimation.
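The core idea of estimating a pose from map-registered objects can be illustrated with a minimal sketch: given 2D centroids of objects observed in the vehicle frame and the corresponding objects' known positions in the HD map, a rigid transform (rotation and translation) aligning the two point sets yields the vehicle pose. The sketch below uses the Kabsch/Umeyama alignment and assumes correspondences are already established; it is an illustrative example, not the authors' implementation, and the function name `estimate_pose_2d` is hypothetical.

```python
import numpy as np

def estimate_pose_2d(map_pts: np.ndarray, obs_pts: np.ndarray):
    """Estimate (R, t) such that R @ obs_i + t ~= map_i for matched
    object centroids, via the Kabsch algorithm (SVD of the cross-covariance).

    map_pts: (N, 2) object positions in the HD map frame.
    obs_pts: (N, 2) the same objects observed in the vehicle frame.
    """
    mu_map = map_pts.mean(axis=0)
    mu_obs = obs_pts.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (obs_pts - mu_obs).T @ (map_pts - mu_map)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the recovered rotation.
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, d]) @ U.T
    t = mu_map - R @ mu_obs
    return R, t
```

The returned rotation and translation place the vehicle in the map frame; in practice, the correspondence step (matching observed objects to map objects via object-centric features) precedes this alignment.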
Funder
Technology Innovation Program Ministry of Trade, Industry & Energy