Radar-Camera Fusion Network for Depth Estimation in Structured Driving Scenes
Authors:
Li Shuguang1, Yan Jiafu2, Chen Haoran1, Zheng Ke1
Affiliation:
1. School of Automation Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
2. School of Mechanical and Electrical Engineering, University of Electronic Science and Technology of China, Chengdu 611731, China
Abstract
Depth estimation is a key component of the perception system in autonomous driving. Current studies often reconstruct dense depth maps from RGB images and the sparse depth measurements provided by other sensors, yet existing methods pay insufficient attention to latent semantic information. Considering the highly structured character of driving scenes, we propose a dual-branch network that predicts dense depth maps by fusing radar data and RGB images. The proposed architecture divides the driving scene into three parts, predicts a depth map for each, and finally merges them into one through a fusion strategy, making full use of the potential semantic information in the driving scene. In addition, a variant L1 loss function is applied in the training phase, directing the network to focus on the areas of greatest interest when driving. The proposed method is evaluated on the nuScenes dataset, and experiments demonstrate its effectiveness in comparison with previous state-of-the-art methods.
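The abstract's "variant L1 loss" is not specified in detail on this page. A plausible reading is a region-weighted L1 loss that up-weights driving-relevant parts of the scene and supervises only pixels with valid (sparse) ground truth. The sketch below is an assumption-laden illustration, not the authors' implementation: the region encoding (`0`, `1`, `2` for the three scene parts) and the per-region weights are hypothetical.

```python
import numpy as np

def region_weighted_l1(pred, gt, region_map, region_weights):
    """Hypothetical sketch of a region-weighted L1 depth loss.

    pred, gt       -- dense predicted / sparse ground-truth depth maps
    region_map     -- integer map assigning each pixel to one of the
                      three scene regions (assumed encoding: 0, 1, 2)
    region_weights -- per-region weights emphasizing areas of interest
    """
    valid = gt > 0                       # sparse GT: supervise valid pixels only
    w = region_weights[region_map]       # broadcast weight to each pixel
    err = np.abs(pred - gt) * w          # weighted absolute error
    return float((err * valid).sum() / max(valid.sum(), 1))
```

For example, with `region_weights = np.array([1.0, 2.0, 3.0])`, errors in region 2 contribute three times as much to the loss as errors in region 0, steering the network toward that region during training.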
Funder
National Key Research and Development Program of China; Key R&D Projects of the Science & Technology Department of Sichuan Province, China
Subject
Electrical and Electronic Engineering; Biochemistry; Instrumentation; Atomic and Molecular Physics, and Optics; Analytical Chemistry
Cited by: 1 article.