Multi-Layer Fusion 3D Object Detection via Lidar Point Cloud and Camera Image
Published: 2024-02-06
Volume: 14, Issue: 4, Page: 1348
ISSN: 2076-3417
Container-title: Applied Sciences
Language: en
Affiliation:
1. College of Information Engineering, East China Jiaotong University, Nanchang 330013, China
Abstract
Object detection is a key task in autonomous driving, and poor performance on small objects remains a challenge to be overcome. Existing object detection networks can detect large objects well under ideal conditions, but detecting small objects is much harder. To address this problem, we propose a multi-layer fusion 3D object detection network. First, we propose a dense fusion (D-fusion) method that differs from traditional fusion: by fusing the feature maps of every layer, more semantic information is preserved in the fusion network. Second, to preserve small objects at the feature-map level, we design a feature extractor with an adaptive fusion module (AFM), which reduces the impact of the background on small objects by weighting and fusing different feature layers. Finally, we add an attention mechanism to the feature extractor, which suppresses task-irrelevant information to improve training efficiency and convergence speed. Experimental results show that our approach substantially improves on the baseline and outperforms most state-of-the-art methods on the KITTI object detection benchmark.
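To make the adaptive fusion idea concrete, the sketch below shows one plausible way to weight and fuse multi-scale feature layers as the abstract describes for the AFM. It is not the paper's implementation: the channel count, the number of levels, the softmax-normalized scalar level weights, the bilinear resizing to the finest level, and the 1x1 fusion convolution are all assumptions made for illustration, written in PyTorch.

```python
# Hedged sketch (not the authors' code): learn one weight per feature level,
# normalize the weights, and fuse the resized levels into a single map so that
# fine-resolution detail relevant to small objects is retained.
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveFusionModule(nn.Module):
    def __init__(self, channels: int, num_levels: int):
        super().__init__()
        # One learnable scalar weight per feature level (assumed design).
        self.level_weights = nn.Parameter(torch.ones(num_levels))
        # 1x1 convolution to mix channels after the weighted sum.
        self.fuse = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, features):
        # Resize every level to the resolution of the first (finest) level,
        # so small-object detail is not lost to coarse feature maps.
        target_size = features[0].shape[-2:]
        resized = [
            F.interpolate(f, size=target_size, mode="bilinear", align_corners=False)
            for f in features
        ]
        # Softmax over levels: weights sum to 1, letting the network
        # down-weight levels dominated by background.
        w = torch.softmax(self.level_weights, dim=0)
        fused = sum(wi * fi for wi, fi in zip(w, resized))
        return self.fuse(fused)


if __name__ == "__main__":
    # Usage: fuse three feature maps of decreasing resolution (assumed shapes).
    feats = [
        torch.randn(1, 64, 80, 80),
        torch.randn(1, 64, 40, 40),
        torch.randn(1, 64, 20, 20),
    ]
    afm = AdaptiveFusionModule(channels=64, num_levels=3)
    print(afm(feats).shape)  # torch.Size([1, 64, 80, 80])
```

Resizing all levels to the finest resolution before the weighted sum is one reasonable choice for preserving small objects; the paper may combine the layers differently.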
Funder
National Natural Science Foundation of China
Cited by: 1 article.