Affiliation:
1. College of Computer Science and Technology, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, China
Abstract
Autonomous vehicles rely extensively on onboard sensors to perceive their surrounding environment for motion planning and vehicle control. Despite recent advancements, prevalent perception algorithms typically use data acquired from a single host vehicle, which leads to challenges such as sensor data sparsity, field-of-view limitations, and occlusion. To address these issues and enhance the perception capabilities of autonomous driving systems, we explore multi-vehicle multimedia cooperative perception by investigating the fusion of LiDAR point clouds and camera images from multiple interconnected vehicles with different positions and viewing angles. Specifically, we introduce CooPercept, a semantic point cloud feature-level cooperative perception framework designed to limit computational complexity and reduce turnaround time; this is crucial because the volume of raw sensor data generally far exceeds the bandwidth of existing vehicular networks. Our approach is validated through experiments on synthetic datasets derived from KITTI and OPV2V. The results demonstrate that the proposed CooPercept model surpasses comparable perception models in both detection accuracy and detection robustness.
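To make the feature-level idea concrete, below is a minimal numpy sketch of the kind of pipeline the abstract describes: each vehicle encodes its own LiDAR point cloud into a compact bird's-eye-view (BEV) feature grid, only those grids (not the raw points) are exchanged, and the ego vehicle warps a received grid into its own frame before fusing. All function names, the grid size, and the element-wise max fusion rule are illustrative assumptions for this sketch, not CooPercept's actual architecture.

```python
import numpy as np

GRID, EXTENT = (128, 128), 50.0  # assumed BEV resolution and metric range

def voxelize_to_bev(points, grid=GRID, extent=EXTENT):
    """Encode an Nx3 LiDAR cloud as a coarse BEV feature map
    (max point height per cell) -- far smaller than the raw points."""
    h, w = grid
    feat = np.full(grid, -np.inf, dtype=np.float32)
    ix = ((points[:, 0] + extent) / (2 * extent) * h).astype(int)
    iy = ((points[:, 1] + extent) / (2 * extent) * w).astype(int)
    ok = (ix >= 0) & (ix < h) & (iy >= 0) & (iy < w)
    np.maximum.at(feat, (ix[ok], iy[ok]), points[ok, 2].astype(np.float32))
    feat[np.isinf(feat)] = 0.0  # mark empty cells
    return feat

def warp_bev(feat, pose, grid=GRID, extent=EXTENT):
    """Resample a neighbor's BEV map into the ego frame (nearest neighbor,
    using only the planar part of the 4x4 neighbor->ego pose)."""
    h, w = grid
    xs = (np.arange(h) + 0.5) / h * 2 * extent - extent
    ys = (np.arange(w) + 0.5) / w * 2 * extent - extent
    gx, gy = np.meshgrid(xs, ys, indexing="ij")
    inv = np.linalg.inv(pose)  # maps ego coordinates into the neighbor frame
    nx = inv[0, 0] * gx + inv[0, 1] * gy + inv[0, 3]
    ny = inv[1, 0] * gx + inv[1, 1] * gy + inv[1, 3]
    ix = ((nx + extent) / (2 * extent) * h).astype(int)
    iy = ((ny + extent) / (2 * extent) * w).astype(int)
    out = np.zeros_like(feat)
    ok = (ix >= 0) & (ix < h) & (iy >= 0) & (iy < w)
    out[ok] = feat[ix[ok], iy[ok]]
    return out

def fuse(ego_feat, warped_neighbor_feats):
    """Element-wise max fusion of spatially aligned feature maps."""
    return np.maximum.reduce([ego_feat] + warped_neighbor_feats)

# Toy example: a neighbor 10 m ahead shares its feature map, not its points.
rng = np.random.default_rng(0)
ego_cloud = rng.uniform(-40, 40, (5000, 3))
nbr_cloud = rng.uniform(-40, 40, (5000, 3))
pose = np.eye(4)
pose[0, 3] = 10.0  # hypothetical neighbor->ego transform
fused = fuse(voxelize_to_bev(ego_cloud),
             [warp_bev(voxelize_to_bev(nbr_cloud), pose)])
print(fused.shape)  # (128, 128) BEV map covering both fields of view
```

Under these toy assumptions, the bandwidth argument is visible in the numbers: a 128x128 float32 grid costs about 64 KB per frame, whereas a raw LiDAR sweep typically runs to megabytes, which is why feature-level sharing fits existing vehicular network capacity where raw-data sharing does not.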
Funder
National Key Research and Development Program of China
National Natural Science Foundation of China
A3 Foresight Program of NSFC
Key Research and Development Program of Jiangsu Province
Cited by
1 article.