Snow-CLOCs: Camera-LiDAR Object Candidate Fusion for 3D Object Detection in Snowy Conditions
Authors:
Fan Xiangsuo 1,2, Xiao Dachuan 1, Li Qi 1,3, Gong Rui 1
Affiliations:
1. School of Automation, Guangxi University of Science and Technology, Liuzhou 545006, China
2. Guangxi Collaborative Innovation Centre for Earthmoving Machinery, Guangxi University of Science and Technology, Liuzhou 545006, China
3. Key Laboratory of Disaster Prevention & Mitigation and Prestress Technology of Guangxi Colleges and Universities, Liuzhou 545006, China
Abstract
Although existing 3D object-detection methods achieve promising results on conventional datasets, detecting objects in data collected under adverse weather conditions remains challenging. Distortion in LiDAR and camera data under such conditions degrades traditional single-sensor detectors, while multi-modal data-fusion methods suffer from this distortion and from low alignment accuracy, making accurate detection difficult. To address this, we propose Snow-CLOCs, a multi-modal object-detection algorithm designed for snowy conditions. For image detection, we improved YOLOv5 by integrating the InceptionNeXt network to enhance feature extraction and by adopting the Wise-IoU loss to reduce dependence on high-quality data. For LiDAR point-cloud detection, we built upon the SECOND detector and applied the dynamic radius outlier removal (DROR) filter to suppress snow-induced noise, improving detection accuracy. The detection candidates from the camera and LiDAR were then merged into a unified candidate set, represented as a sparse tensor, from which a 2D convolutional neural network extracts features to produce the final object detections and localizations. Snow-CLOCs achieved a vehicle-detection accuracy of 86.61% in snowy conditions.
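For readers unfamiliar with the de-noising step, the following is a minimal illustrative sketch (Python, using NumPy and SciPy) of a dynamic radius outlier removal (DROR) filter of the kind the abstract refers to; the function name and the parameter defaults (angular resolution, radius multiplier, neighbour threshold, minimum radius) are assumptions for illustration only and are not the settings reported for Snow-CLOCs.

import numpy as np
from scipy.spatial import cKDTree

def dror_filter(points, alpha_deg=0.16, beta=3.0, k_min=3, sr_min=0.04):
    # Dynamic Radius Outlier Removal: the neighbour-search radius grows with each
    # point's horizontal range, so isolated snow returns near the sensor are removed
    # while genuinely sparse far-field returns are kept.
    # points: (N, 3) array of x, y, z coordinates; returns a boolean keep-mask.
    # alpha_deg (assumed sensor angular resolution), beta, k_min and sr_min are
    # illustrative defaults, not values taken from the paper.
    xy_range = np.linalg.norm(points[:, :2], axis=1)
    search_radius = np.maximum(sr_min, beta * xy_range * np.radians(alpha_deg))
    tree = cKDTree(points[:, :3])
    keep = np.zeros(len(points), dtype=bool)
    for i, (p, r) in enumerate(zip(points[:, :3], search_radius)):
        # query_ball_point also counts the query point itself, hence k_min + 1
        keep[i] = len(tree.query_ball_point(p, r)) >= k_min + 1
    return keep

# Example: keep only points classified as non-snow
# filtered_points = points[dror_filter(points)]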
Funder
Guangxi Science and Technology Major Project
References (42 articles; first five listed below)
1. Jocher, G., Chaurasia, A., Stoken, A., Borovec, J., Kwon, Y., Michael, K., Fang, J., Yifu, Z., Wong, C., and Montes, D. (2022). ultralytics/yolov5: v7.0 - YOLOv5 SOTA Realtime Instance Segmentation. Zenodo.
2. Wang, C.Y., Yeh, I.H., and Liao, H.Y.M. (2024). YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information. arXiv.
3. Wang, C.Y., Bochkovskiy, A., and Liao, H.Y.M. (2023, June 17–24). YOLOv7: Trainable bag-of-freebies sets new state-of-the-art for real-time object detectors. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, Vancouver, BC, Canada.
4. Girshick, R. (2015, December 7–13). Fast R-CNN. Proceedings of the IEEE International Conference on Computer Vision, Santiago, Chile.
5. Qi, C.R., Su, H., Mo, K., and Guibas, L.J. (2017, July 21–26). PointNet: Deep learning on point sets for 3D classification and segmentation. Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, Honolulu, HI, USA.
Cited by 1 article.