FusionRCNN: LiDAR-Camera Fusion for Two-Stage 3D Object Detection
Published: 2023-03-30
Volume: 15, Issue: 7, Page: 1839
ISSN: 2072-4292
Container-title: Remote Sensing
Language: en
Author:
Xu Xinli (1), Dong Shaocong (1), Xu Tingfa (1,2), Ding Lihe (1), Wang Jie (1), Jiang Peng (3), Song Liqiang (3), Li Jianan (1)
Affiliation:
1. Image Engineering & Video Technology Lab, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
2. Big Data and Artificial Intelligence Laboratory, Beijing Institute of Technology Chongqing Innovation Center (BITCQIC), Chongqing 401135, China
3. National Astronomical Observatories of China, Beijing 100107, China
Abstract
Accurate and reliable perception systems are essential for autonomous driving and robotics. To achieve this, 3D object detection with multiple sensors is necessary. Existing 3D detectors have significantly improved accuracy by adopting a two-stage paradigm that relies solely on LiDAR point clouds for 3D proposal refinement. However, the sparsity of point clouds, particularly for faraway points, makes it difficult for a LiDAR-only refinement module to recognize and locate objects accurately. To address this issue, we propose a novel multi-modality two-stage approach called FusionRCNN, which effectively and efficiently fuses point clouds and camera images within Regions of Interest (RoI). FusionRCNN adaptively integrates sparse geometry information from LiDAR and dense texture information from the camera in a unified attention mechanism. Specifically, in the RoI extraction step, FusionRCNN first applies RoIPooling to obtain an image set with a unified size and obtains the point set by sampling raw points within proposals. It then applies intra-modality self-attention to enhance the domain-specific features, followed by a well-designed cross-attention that fuses information from the two modalities. FusionRCNN is fundamentally plug-and-play and supports different one-stage methods with almost no architectural changes. Extensive experiments on the KITTI and Waymo benchmarks demonstrate that our method significantly boosts the performance of popular detectors. Remarkably, FusionRCNN improves the strong SECOND baseline by 6.14% mAP on Waymo and outperforms competing two-stage approaches.
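The per-RoI fusion described above (intra-modality self-attention on each feature set, then cross-attention between the two modalities) can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the feature dimensions, the single-head attention, the residual connections, and the function names are all assumptions made for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Scaled dot-product attention: (n_q, d), (n_k, d), (n_k, d) -> (n_q, d).
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

def fuse_roi_features(point_feats, image_feats):
    """Hypothetical sketch of the fusion step in the abstract:
    self-attention within each modality, then cross-attention in which
    each modality queries the other, inside a single RoI."""
    # Intra-modality self-attention enhances domain-specific features.
    p = attention(point_feats, point_feats, point_feats)
    i = attention(image_feats, image_feats, image_feats)
    # Cross-attention fuses the two modalities (residuals are an assumption).
    p_fused = p + attention(p, i, i)  # points attend to dense image texture
    i_fused = i + attention(i, p, p)  # image attends to sparse point geometry
    return p_fused, i_fused

# Toy RoI: 128 sampled points and a 7x7 pooled image patch, 64-dim features.
rng = np.random.default_rng(0)
pts = rng.normal(size=(128, 64))
img = rng.normal(size=(49, 64))
p_out, i_out = fuse_roi_features(pts, img)
print(p_out.shape, i_out.shape)  # (128, 64) (49, 64)
```

Because each attention call maps queries of shape (n_q, d) to outputs of the same shape, the fused features keep the per-modality layout, which is what makes such a refinement module pluggable behind different one-stage proposal networks.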
Funder
National Natural Science Foundation of China; Postdoctoral Science Foundation of China; Beijing Institute of Technology Research Fund Program for Young Scholars
Subject
General Earth and Planetary Sciences
Cited by: 6 articles.