Three-Dimensional Point Cloud Object Detection Based on Feature Fusion and Enhancement
Published: 2024-03-15
Issue: 6
Volume: 16
Page: 1045
ISSN: 2072-4292
Container-title: Remote Sensing
Language: en
Short-container-title: Remote Sensing
Author:
Li Yangyang 1, Ou Zejun 1, Liu Guangyuan 1, Yang Zichen 1, Chen Yanqiao 2, Shang Ronghua 1, Jiao Licheng 1
Affiliation:
1. Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, Joint International Research Laboratory of Intelligent Perception and Computation, International Research Center for Intelligent Perception and Computation, Collaborative Innovation Center of Quantum Information of Shaanxi Province, School of Artificial Intelligence, Xidian University, Xi’an 710071, China
2. The 54th Research Institute of China Electronics Technology Group Corporation, Shijiazhuang 050081, China
Abstract
With the continued emergence and development of 3D sensors in recent years, it has become increasingly convenient to collect point cloud data for 3D object detection tasks, for example in autonomous driving. However, existing methods face two problems that cannot be ignored: (1) The bird’s eye view (BEV) is widely used in 3D object detection, but the BEV is usually obtained by compressing the height dimension into the channel dimension, which makes feature extraction during feature fusion more difficult. (2) Light detection and ranging (LiDAR) has a long effective scanning range, so the scanned sector becomes sparse at large depths and the point cloud is unevenly distributed; as a result, few neighboring points are available around the key points of interest. This paper proposes the following solutions: (1) a multi-scale feature fusion scheme that combines feature maps at different levels produced by Deep Layer Aggregation (DLA) with a feature fusion module for the BEV; (2) a point completion network that refines the prediction results in the second stage by completing the feature points inside the candidate boxes, thereby strengthening their position features, and supervised contrastive learning that improves the segmentation results by enhancing the discrimination between foreground and background. Experiments show that these additions achieve improvements of 2.7%, 2.4%, and 2.5% on the KITTI easy, moderate, and hard tasks, respectively. Further ablation experiments show that each addition yields a promising improvement over the baseline.
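To make the first problem concrete, the sketch below (not the paper's code; PyTorch is assumed, and the function name and tensor shapes are illustrative) shows the common BEV construction the abstract refers to: a voxel feature volume is flattened into a 2D map by folding the height axis into the channel axis, so height information must later be disentangled from the channels during feature extraction and fusion.

    import torch

    def to_bev(voxel_features: torch.Tensor) -> torch.Tensor:
        # Collapse the height axis D of an (N, C, D, H, W) voxel grid
        # into the channel axis, yielding an (N, C*D, H, W) BEV map.
        n, c, d, h, w = voxel_features.shape
        return voxel_features.reshape(n, c * d, h, w)

    # Example: 64 channels over 8 height bins become one 512-channel map.
    bev = to_bev(torch.randn(2, 64, 8, 200, 176))
    assert bev.shape == (2, 512, 200, 176)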
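Similarly, the supervised contrastive enhancement can be illustrated with a minimal per-point loss in the spirit of generic supervised contrastive learning; this is a sketch under stated assumptions (PyTorch, a hypothetical function name, binary foreground/background labels), not the paper's exact formulation. Points sharing a label attract each other in embedding space, which is what sharpens the foreground/background discrimination.

    import torch
    import torch.nn.functional as F

    def supervised_contrastive_loss(feats, labels, temperature=0.1):
        # feats: (P, C) per-point embeddings; labels: (P,) with
        # 0 = background, 1 = foreground. A generic SupCon-style loss.
        feats = F.normalize(feats, dim=1)               # unit-length rows
        sim = feats @ feats.t() / temperature           # pairwise similarity
        eye = torch.eye(len(feats), dtype=torch.bool, device=feats.device)
        sim.masked_fill_(eye, float('-inf'))            # drop self-pairs
        log_prob = sim - torch.logsumexp(sim, dim=1, keepdim=True)
        pos = (labels.unsqueeze(0) == labels.unsqueeze(1)) & ~eye
        # Mean negative log-probability of each anchor's positives.
        per_anchor = -log_prob.masked_fill(~pos, 0.0).sum(dim=1) \
                     / pos.sum(dim=1).clamp(min=1)
        return per_anchor[pos.any(dim=1)].mean()

    # Example: six point features, three foreground and three background.
    loss = supervised_contrastive_loss(torch.randn(6, 32),
                                       torch.tensor([1, 1, 1, 0, 0, 0]))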
Funder
National Natural Science Foundation of China under Grants
Research Project of SongShan Laboratory
Natural Science Basic Research Program of Shaanxi
Fund for Foreign Scholars in University Research and Teaching Programs