Authors:
Wei Ming, Zhu Ming, Zhang Yaoyuan, Wang Jiarong, Sun Jiaqi
Abstract
The integration of multiple sensors is a crucial and emerging trend in the development of autonomous driving technology. Depth images obtained by stereo matching from a binocular camera are easily degraded by environmental conditions and distance, while LiDAR point clouds offer strong penetration but are far sparser than binocular images. LiDAR-stereo fusion combines the complementary strengths of the two sensors and maximizes the acquisition of reliable three-dimensional information, improving the safety of autonomous driving. Cross-sensor fusion is therefore a key issue in the development of autonomous driving technology. This study proposes a real-time LiDAR-stereo depth completion network, free of 3D convolution, that fuses point clouds and binocular images through injection guidance. A kernel-connected spatial propagation network is then used to refine the depth, so that the dense 3D output is more accurate for autonomous driving. Experimental results on the KITTI dataset show that our method is effective while running in real time, and experiments on the p-KITTI dataset demonstrate its robustness to sensor defects and challenging environmental conditions.
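To make the refinement stage concrete, below is a minimal NumPy sketch of one spatial-propagation refinement loop in the generic CSPN style (Cheng et al.), not the paper's kernel-connected variant: each pixel's depth is iteratively replaced by an affinity-weighted mixture of itself and its 3x3 neighbors, and pixels with LiDAR measurements are re-anchored after every step. The function name cspn_refine, the array shapes, and the replacement-based anchoring are illustrative assumptions.

```python
import numpy as np

def cspn_refine(depth, affinity, sparse_depth, iterations=8):
    """CSPN-style spatial propagation (illustrative sketch, not the paper's exact method).

    depth:        (H, W) initial dense depth prediction
    affinity:     (8, H, W) learned affinity toward each 3x3 neighbor
    sparse_depth: (H, W) sparse LiDAR depth, 0 where there is no measurement
    """
    # Normalize affinities so the eight neighbor weights and the
    # self weight sum to 1 at every pixel (standard CSPN normalization).
    abs_sum = np.abs(affinity).sum(axis=0, keepdims=True) + 1e-8
    kappa = affinity / abs_sum            # (8, H, W) neighbor weights
    self_w = 1.0 - kappa.sum(axis=0)      # (H, W) weight kept by the pixel itself

    offsets = [(-1, -1), (-1, 0), (-1, 1),
               (0, -1),           (0, 1),
               (1, -1),  (1, 0),  (1, 1)]
    valid = sparse_depth > 0
    d = depth.copy()
    for _ in range(iterations):
        out = self_w * d
        for k, (dy, dx) in enumerate(offsets):
            # Fetch d[y+dy, x+dx] via circular shifts; the wrap-around at
            # image borders is a simplification of proper edge padding.
            out += kappa[k] * np.roll(np.roll(d, -dy, axis=0), -dx, axis=1)
        d = out
        # Re-anchor pixels that have LiDAR measurements.
        d[valid] = sparse_depth[valid]
    return d
```

In a full network the affinity volume would be predicted by the fusion backbone from the injected LiDAR and stereo features; here it is simply an input array so the propagation rule itself stays visible.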
Subject
Artificial Intelligence, Biomedical Engineering