Range-Intensity-Profile-Guided Gated Light Ranging and Imaging Based on a Convolutional Neural Network
Authors:
Xia Chenhao 1,2, Wang Xinwei 1,2,3, Sun Liang 1, Zhang Yue 1, Song Bo 1,2, Zhou Yan 1,2,3
Affiliations:
1. Optoelectronic System Laboratory, Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China
2. Center of Materials Science and Optoelectronics Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
3. School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing 100049, China
Abstract
Three-dimensional (3D) range-gated imaging can obtain high-spatial-resolution intensity images as well as pixel-wise depth information. Several algorithms have been developed to recover depth from gated images, such as the range-intensity correlation algorithm and deep-learning-based algorithms. The traditional range-intensity correlation algorithm requires specific range-intensity profiles, which are hard to generate, while existing deep-learning-based algorithms require a large amount of real-scene training data. In this work, we propose a method of range-intensity-profile-guided gated light ranging and imaging that recovers depth from gated images with a convolutional neural network. In this method, the range-intensity profile (RIP) of a given gated light ranging and imaging system is measured and used to generate synthetic training data from Grand Theft Auto V for our range-intensity ratio and semantic network (RIRS-net). The RIRS-net is mainly trained on the synthetic data and fine-tuned with RIP data. The network learns both semantic depth cues and range-intensity depth cues from the synthetic data, and learns accurate range-intensity depth cues from the RIP data. In evaluation experiments on both real-scene and synthetic test datasets, our method outperforms the other algorithms.
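To make the underlying idea concrete, the following is a minimal, illustrative sketch (not the paper's code) of the classic range-intensity correlation scheme the abstract contrasts with: given two gated images whose range-intensity profiles overlap, the per-pixel intensity ratio encodes depth inside the overlap zone. The ideal triangular RIPs and the gate range `[z_min, z_max]` assumed below are hypothetical parameters chosen only for illustration.

```python
import numpy as np

def depth_from_gated_pair(i_near, i_far, z_min=50.0, z_max=150.0):
    """Recover a per-pixel depth map from two gated intensity images.

    Assumes idealized triangular RIPs over the overlap zone [z_min, z_max]:
    i_near : intensity falls linearly from full response at z_min to 0 at z_max
    i_far  : intensity rises linearly from 0 at z_min to full response at z_max
    Returns depth in the same units as z_min / z_max.
    """
    i_near = i_near.astype(np.float64)
    i_far = i_far.astype(np.float64)
    total = i_near + i_far
    # The ratio i_far / (i_near + i_far) is depth-coded: 0 at z_min, 1 at
    # z_max for ideal triangular profiles. Guard against zero-signal pixels.
    ratio = np.divide(i_far, total, out=np.zeros_like(total), where=total > 0)
    return z_min + ratio * (z_max - z_min)

# Synthetic check: a flat target returning equal energy into both gates sits
# at the midpoint of the overlap zone.
i_near = np.full((4, 4), 0.5)
i_far = np.full((4, 4), 0.5)
depth = depth_from_gated_pair(i_near, i_far)  # every pixel maps to 100.0
```

Real RIPs deviate from this triangular ideal, which is exactly the difficulty the abstract notes: the correlation algorithm needs specific, hard-to-generate profiles, motivating the learned RIRS-net approach instead.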
Funder
Beijing Municipal Natural Science Foundation Key Research Project; National Key Research and Development Program of China; National Natural Science Foundation of China; Youth Innovation Promotion Association of the Chinese Academy of Sciences
Cited by 1 article.
1. Research Progress of Laser Range-Gated Three-Dimensional Imaging Technology (Invited); Infrared and Laser Engineering; 2024