Abstract
With the rapid development of intelligent connected vehicles, the hardware and onboard systems that support driver assistance functions face increasing demands. Most current vehicles are constrained by limited onboard computing resources and mainly process single-task, single-sensor data, which poses a significant challenge for complex panoramic driving perception. While the panoramic driving perception algorithm YOLOP achieves outstanding multi-task performance, it suffers from poorly adaptive feature-map pooling operations and loss of detail during downsampling. To address these issues, this paper proposes a panoramic driving perception fusion algorithm based on multi-task learning. During training, different loss functions are introduced and the lidar point cloud data undergo a series of processing steps; the perception information from the lidar and vision sensors is then fused to process multi-task, multi-sensor data synchronously, effectively improving the performance and reliability of the panoramic driving perception system. The proposed algorithm is evaluated on the BDD100K dataset to assess its multi-task performance. Compared to the YOLOP model, the multi-task learning network performs better in lane detection, drivable area detection, and vehicle detection: lane detection accuracy improves by 11.6%, the mean Intersection over Union (mIoU) for drivable area detection increases by 2.1%, and the mean Average Precision at 50% IoU (mAP50) for vehicle detection improves by 3.7%.
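For context on the reported metrics, the abstract evaluates drivable area segmentation with mIoU and vehicle detection with mAP50. The snippet below is a minimal, illustrative sketch of how per-class IoU and mIoU are typically computed for segmentation masks; the function names and the NumPy-based label maps are assumptions for illustration, not the paper's implementation.

```python
import numpy as np

def iou(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """IoU between two boolean masks: |intersection| / |union|."""
    intersection = np.logical_and(pred_mask, gt_mask).sum()
    union = np.logical_or(pred_mask, gt_mask).sum()
    return float(intersection) / union if union > 0 else 0.0

def mean_iou(pred_labels: np.ndarray, gt_labels: np.ndarray, num_classes: int) -> float:
    """mIoU: average of per-class IoU over classes present in the ground truth."""
    per_class = []
    for c in range(num_classes):
        gt_c = gt_labels == c
        if gt_c.sum() == 0:  # skip classes absent from the ground truth
            continue
        per_class.append(iou(pred_labels == c, gt_c))
    return float(np.mean(per_class)) if per_class else 0.0
```

mAP50 follows the same intersection-over-union idea, but applies it to predicted and ground-truth bounding boxes and averages detection precision over recall at an IoU threshold of 0.5.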
Funder
National Natural Science Foundation of China
Guangxi Science and Technology Base and Talent Project
Guangxi Key Laboratory of Machine Vision and Intelligent Control
Guangxi Minzu University Graduate Innovation Program
Publisher
Public Library of Science (PLoS)