Abstract
Autonomous driving requires accurate, robust, and fast perception systems to understand the driving environment and support decision-making. Object detection is critical to giving the perception system this understanding. Perception systems, especially for 2D object detection and classification, have succeeded thanks to the emergence of deep learning (DL) in computer vision (CV) applications. However, 2D object detection lacks depth information, which is crucial for understanding the driving environment. Therefore, 3D object detection is fundamental for the perception systems of autonomous driving and robotics applications, allowing them to estimate object locations and interpret the scene. The CV community has recently given much attention to 3D object detection because of the growth of DL models and the need for accurate object localization. Nevertheless, 3D object detection remains challenging because of scale changes, the lack of 3D sensor information, and occlusions. Researchers have been using multiple sensors to address these problems and further enhance the performance of the perception system. This survey presents multisensor (camera, radar, and LiDAR) fusion-based 3D object detection methods. Fully autonomous vehicles must be equipped with multiple sensors for robust and reliable driving. The camera, LiDAR, and radar sensors, together with their respective advantages and disadvantages, are presented. Relevant datasets are then summarized, and state-of-the-art multisensor fusion-based methods are reviewed. Finally, challenges, open issues, and possible research directions are discussed.
Publisher
Institute of Electrical and Electronics Engineers (IEEE)
Cited by
8 articles.