Abstract
Environment perception remains one of the key tasks in autonomous driving for which solutions have yet to reach maturity. Multi-modal approaches benefit from the complementary physical properties of each sensor technology used, boosting overall performance. The added complexity brought on by data fusion is not trivial to manage, with design decisions heavily influencing the balance between the quality and the latency of the results. In this paper we present a novel real-time, 360° enhanced perception component based on low-level fusion between the geometry provided by LiDAR 3D point clouds and the semantic scene information obtained from multiple RGB cameras of multiple types. This multi-modal, multi-sensor scheme enables better range coverage and improved detection and classification quality, with increased robustness. Semantic, instance, and panoptic segmentations of the 2D data are computed with efficient deep-learning-based algorithms, while the 3D point clouds are segmented with a fast, traditional voxel-based solution. Finally, fusion through point-to-image projection yields a semantically enhanced 3D point cloud that enables enhanced perception via 3D detection refinement and 3D object classification. The planning and control systems of the vehicle receive the perception results of the individual sensors together with the enhanced one, as well as the semantically enhanced 3D points. The developed perception solutions are successfully integrated into an autonomous vehicle software stack as part of the UP-Drive project.
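To make the fusion step concrete, below is a minimal sketch of point-to-image projection for a single camera, assuming an ideal pinhole model with intrinsic matrix K and a known rigid LiDAR-to-camera extrinsic transform; the function and variable names are illustrative, not taken from the paper, and lens distortion is ignored.

```python
import numpy as np

def label_points_with_semantics(points_lidar, T_cam_lidar, K, seg_mask):
    """Tag each LiDAR point with the semantic class of the pixel it projects onto.

    points_lidar : (N, 3) XYZ coordinates in the LiDAR frame
    T_cam_lidar  : (4, 4) rigid transform taking LiDAR-frame points to the camera frame
    K            : (3, 3) pinhole camera intrinsic matrix
    seg_mask     : (H, W) integer class IDs from the 2D semantic segmentation
    Returns an (N,) int array of class IDs; -1 marks points that fall outside
    the image or lie behind the camera.
    """
    n = points_lidar.shape[0]
    labels = np.full(n, -1, dtype=np.int32)

    # Move points into the camera frame using homogeneous coordinates.
    pts_h = np.hstack([points_lidar, np.ones((n, 1))])
    pts_cam = (T_cam_lidar @ pts_h.T).T[:, :3]

    # Only points in front of the image plane can receive a label.
    idx = np.flatnonzero(pts_cam[:, 2] > 0.0)

    # Perspective projection: pixel = K * (X/Z, Y/Z, 1).
    uvw = (K @ pts_cam[idx].T).T
    u = uvw[:, 0] / uvw[:, 2]
    v = uvw[:, 1] / uvw[:, 2]

    # Keep projections that fall inside the image bounds.
    h, w = seg_mask.shape
    inside = (u >= 0) & (u < w) & (v >= 0) & (v < h)

    # Look up the semantic label at each projected pixel.
    labels[idx[inside]] = seg_mask[v[inside].astype(int), u[inside].astype(int)]
    return labels
```

In a multi-camera, 360° setup such as the one described, this lookup would run once per camera; points seen by several cameras could, for instance, keep the label from the camera with the smallest viewing distance.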
Funder
European Union
Ministerul Cercetării și Inovării
Subject
Electrical and Electronic Engineering; Biochemistry; Instrumentation; Atomic and Molecular Physics, and Optics; Analytical Chemistry
References
54 articles.
1. UP-Drive H2020 European Union Program, Grant 688652. https://up-drive.ethz.ch/
2. Buerki. Map Management for Efficient Long-Term Visual Localization in Outdoor Environments. Proceedings of the IEEE Intelligent Vehicles Symposium (IV), 2018.
3. Varga. Super-sensor for 360-degree environment perception: Point cloud segmentation using image features. Proceedings of the 2017 IEEE 20th International Conference on Intelligent Transportation Systems (ITSC), 2017.
4. EB Assist ADTF: Automotive Data and Time-Triggered Framework. https://www.elektrobit.com/products/automated-driving/eb-assist/adtf/
5. Geiger. Are we ready for autonomous driving? The KITTI vision benchmark suite. Proceedings of the 2012 IEEE Conference on Computer Vision and Pattern Recognition, 2012.
Cited by
16 articles.