General-Purpose Deep Learning Detection and Segmentation Models for Images from a Lidar-Based Camera Sensor
Author:
Xianjia Yu1, Sahar Salimpour1, Jorge Peña Queralta1, Tomi Westerlund1
Affiliation:
1. Turku Intelligent Embedded and Robotic Systems Laboratory, Faculty of Technology, University of Turku, 20500 Turku, Finland
Abstract
Over the last decade, robotic perception algorithms have benefited significantly from the rapid advances in deep learning (DL). Indeed, a significant part of the autonomy stack of commercial and research platforms relies on DL for situational awareness, especially when processing data from vision sensors. This work explored the potential of general-purpose DL perception algorithms, specifically detection and segmentation neural networks, for processing the image-like outputs of advanced lidar sensors. Rather than processing three-dimensional point cloud data, this is, to the best of our knowledge, the first work to focus on the low-resolution, 360° field-of-view images that lidar sensors produce by encoding depth, reflectivity, or near-infrared light in the image pixels. We showed that, with adequate preprocessing, general-purpose DL models can process these images, opening the door to their use in environmental conditions where vision sensors present inherent limitations. We provided both a qualitative and quantitative analysis of the performance of a variety of neural network architectures. We believe that using DL models built for visual cameras offers significant advantages due to their much wider availability and maturity compared to point cloud-based perception.
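The preprocessing step the abstract refers to can be illustrated with a minimal sketch: lidar signal channels are typically single-channel, high-bit-depth images with long-tailed intensity distributions, so before an RGB-trained detector can consume them they are contrast-stretched and replicated to three channels. The function name, the 16-bit input assumption, and the percentile values below are illustrative, not taken from the paper.

```python
import numpy as np

def lidar_channel_to_rgb(img16, p_low=1.0, p_high=99.0):
    """Convert a 16-bit single-channel lidar image (depth, reflectivity,
    or near-infrared) into an 8-bit 3-channel image that off-the-shelf
    detection/segmentation networks trained on RGB photos can consume.

    Percentile clipping suppresses the long-tailed intensity distribution
    typical of lidar channels before linear scaling to [0, 255].
    """
    img = img16.astype(np.float32)
    lo, hi = np.percentile(img, [p_low, p_high])
    img = np.clip(img, lo, hi)
    img = (img - lo) / max(hi - lo, 1e-6) * 255.0
    img8 = img.astype(np.uint8)
    # Replicate the single channel so the tensor matches the 3-channel
    # input layout expected by RGB-pretrained models.
    return np.stack([img8] * 3, axis=-1)

# Example: a synthetic 128x1024 signal frame (a common panoramic
# lidar image resolution), drawn from a skewed distribution.
rng = np.random.default_rng(0)
frame = rng.gamma(2.0, 400.0, (128, 1024)).astype(np.uint16)
rgb = lidar_channel_to_rgb(frame)
```

The resulting `rgb` array can then be passed to any standard detector or segmentation network in place of a camera frame.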
Funder
Secure Systems Research Center (SSRC), Technology Innovation Institute
Subject
Electrical and Electronic Engineering, Biochemistry, Instrumentation, Atomic and Molecular Physics and Optics, Analytical Chemistry
Cited by
4 articles.