Abstract
Recent progress in the development of measurement systems for autonomous recognition has had a substantial impact on emerging technologies in numerous fields, especially robotics and automotive applications. In particular, time-of-flight (TOF) based light detection and ranging (LiDAR) systems make it possible to map the surrounding environment over long distances and with high accuracy. Combining advanced LiDAR with an artificial intelligence platform enables enhanced object recognition and classification, which, however, still suffers from inaccuracy and misidentification. Recently, multi-spectral LiDAR systems have been employed to improve object recognition performance by additionally providing material information in the short-wave infrared (SWIR) range, where the reflection spectrum is typically highly sensitive to material properties. However, previous multi-spectral LiDAR systems relied on band-pass filters or complex dispersive optical systems and even required multiple photodetectors, adding complexity and cost. In this work, we propose a time-division-multiplexing (TDM) based multi-spectral LiDAR system for semantic object inference. The TDM scheme enables the simultaneous acquisition of spatial and spectral information, together with a TOF-based distance map, with minimized optical loss and only a single photodetector. Our LiDAR system uses nanosecond pulses at five different wavelengths in the SWIR range to acquire sufficient material information in addition to 3D spatial information. To demonstrate the recognition performance, we map multi-spectral images of a human hand, a mannequin hand, a fabric-gloved hand, a nitrile-gloved hand, and a printed human hand onto RGB-color-encoded images, which clearly visualize the material-dependent spectral differences as color even though the objects share a similar shape. Additionally, the classification performance is demonstrated with a convolutional neural network (CNN) model using the full multi-spectral data set. Our work presents a novel, compact spectroscopic LiDAR system that improves recognition performance and thus holds great potential to improve safety and reliability in autonomous driving.
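To make the measurement chain described above concrete, the following is a minimal Python sketch of the two basic processing steps: converting a measured round-trip pulse delay into a TOF distance, and collapsing a five-channel SWIR reflectance frame into a false-color RGB image. The 5×3 mixing matrix, channel ordering, and normalization are illustrative assumptions for this sketch only, not the processing pipeline reported in the paper.

```python
import numpy as np

C = 2.998e8  # speed of light in vacuum [m/s]

def tof_distance(delay_s):
    """Convert a round-trip pulse delay (s) to target distance (m): d = c * t / 2."""
    return C * delay_s / 2.0

def swir_to_rgb(reflectance, weights=None):
    """
    Map a per-pixel 5-channel SWIR reflectance frame of shape (H, W, 5) to a
    false-color RGB image of shape (H, W, 3). The default 5x3 mixing matrix is
    a hypothetical assignment of the five SWIR bands to red/green/blue.
    """
    if weights is None:
        weights = np.array([
            [1.0, 0.0, 0.0],
            [0.5, 0.5, 0.0],
            [0.0, 1.0, 0.0],
            [0.0, 0.5, 0.5],
            [0.0, 0.0, 1.0],
        ])
    rgb = reflectance @ weights          # linear mix of the five bands per pixel
    rgb /= max(rgb.max(), 1e-12)         # global normalization to [0, 1]
    return rgb

# Example: a synthetic 2x2-pixel frame with 5 SWIR channels per pixel
frame = np.random.rand(2, 2, 5)
delays = np.array([66.7e-9, 67.1e-9])    # example round-trip delays [s]
print(tof_distance(delays))              # targets at roughly 10 m
print(swir_to_rgb(frame).shape)          # (2, 2, 3)
```

In a TDM scheme, the five wavelength channels would be separated in time rather than by filters, so a single photodetector trace can be sliced into the five reflectance values used above; that slicing step is omitted here for brevity.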
Publisher
Springer Science and Business Media LLC
Cited by
1 article.