Author:
Xie Guotao, Zhang Jing, Tang Junfeng, Zhao Hongfei, Sun Ning, Hu Manjiang
Abstract
Purpose
For the industrial application of intelligent and connected vehicles (ICVs), the robustness and accuracy of environmental perception are critical in challenging conditions. The accuracy of perception, however, depends closely on the performance of the sensors mounted on the vehicle. To improve sensor performance and thereby the accuracy of environmental perception, this paper aims to introduce an obstacle detection method based on the depth fusion of lidar and radar in challenging conditions, which reduces the false-alarm rate caused by sensor misdetection.
Design/methodology/approach
First, a multi-layer self-calibration method is proposed based on the spatial and temporal relationships between the sensors. Next, a depth fusion model is proposed to improve obstacle detection performance in challenging conditions. Finally, tests are carried out in challenging conditions, including a straight unstructured road, an unstructured road with a rough surface and an unstructured road with heavy dust or mist.
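To illustrate the calibration step, the following is a minimal sketch, assuming a planar radar-to-lidar extrinsic (yaw plus translation) and nearest-timestamp alignment; the extrinsic values, thresholds and function names are hypothetical illustrations, not the paper's self-calibration method.

```python
# Minimal sketch of spatial and temporal alignment between radar and lidar.
# The extrinsic parameters below are placeholders; in the paper's setting they
# would be estimated by the multi-layer self-calibration, not hard-coded.
import numpy as np

YAW_RADAR_TO_LIDAR = np.deg2rad(1.5)       # assumed yaw offset (rad)
T_RADAR_TO_LIDAR = np.array([0.8, 0.0])    # assumed translation (m): forward, left


def radar_to_lidar_frame(ranges, azimuths):
    """Project radar (range, azimuth) detections into x/y of the lidar frame."""
    x = ranges * np.cos(azimuths)
    y = ranges * np.sin(azimuths)
    c, s = np.cos(YAW_RADAR_TO_LIDAR), np.sin(YAW_RADAR_TO_LIDAR)
    rot = np.array([[c, -s], [s, c]])
    return (rot @ np.vstack([x, y])).T + T_RADAR_TO_LIDAR


def nearest_radar_frame(lidar_stamps, radar_stamps):
    """Temporal alignment: index of the closest radar frame for each lidar scan."""
    diff = np.abs(np.asarray(lidar_stamps)[:, None] - np.asarray(radar_stamps)[None, :])
    return diff.argmin(axis=1)


# Example: two radar targets at (20 m, -5 deg) and (35 m, 10 deg)
targets_xy = radar_to_lidar_frame(np.array([20.0, 35.0]), np.deg2rad([-5.0, 10.0]))
```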
Findings
The experimental tests in challenging conditions demonstrate that, compared with the use of a single sensor, the depth fusion model can filter out radar false alarms and the point clouds of dust or mist received by the lidar. As a result, the accuracy of object detection is improved under challenging conditions.
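To make the filtering behaviour concrete, below is a minimal sketch, assuming radar targets already projected into the lidar frame and lidar points already segmented into clusters; the gate radius, density threshold and function names are illustrative assumptions rather than the paper's depth fusion model.

```python
# Minimal cross-validation sketch (not the paper's depth fusion model):
# a radar target with no nearby lidar cluster is dropped as a false alarm,
# and a sparse lidar cluster with no radar confirmation is dropped as dust/mist.
import numpy as np

GATE_RADIUS_M = 1.5        # assumed radar-lidar association gate
MIN_SOLID_POINTS = 30      # assumed point count for a solid (non-dust) cluster


def fuse_detections(radar_xy, lidar_clusters):
    """radar_xy: (N, 2) radar targets in the lidar frame.
    lidar_clusters: list of (M_i, 2) point arrays, one per segmented cluster.
    Returns centroids of the obstacles kept after fusion."""
    if len(lidar_clusters) == 0:
        return np.empty((0, 2))

    centroids = np.array([c.mean(axis=0) for c in lidar_clusters])
    sizes = np.array([len(c) for c in lidar_clusters])

    confirmed = np.zeros(len(lidar_clusters), dtype=bool)
    for target in np.atleast_2d(radar_xy):
        dist = np.linalg.norm(centroids - target, axis=1)
        confirmed |= dist < GATE_RADIUS_M   # lidar cluster corroborated by radar

    # Keep clusters that are either radar-confirmed or dense enough to be solid.
    keep = confirmed | (sizes >= MIN_SOLID_POINTS)
    return centroids[keep]
```

The symmetry of the check is the point of this sketch: each sensor vetoes the other's characteristic failure mode, radar clutter on one side and diffuse dust or mist returns on the other.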
Originality/value
The multi-layer self-calibration method improves the accuracy of the calibration and reduces the workload of manual calibration. In addition, the depth fusion model based on lidar and radar achieves high precision by filtering out radar false alarms and the point clouds of dust or mist received by the lidar, which improves ICVs' performance in challenging conditions.
Subject
Industrial and Manufacturing Engineering, Computer Science Applications, Control and Systems Engineering
Cited by: 9 articles.