A Faster and Lightweight Lane Detection Method in Complex Scenarios
Published: 2024-06-25
Issue: 13
Volume: 13
Page: 2486
ISSN: 2079-9292
Container-title: Electronics
Language: English
Author:
Nie Shuaiqi (1), Zhang Guiheng (1), Yun Libo (1), Liu Shuxian (1)
Affiliation:
1. School of Computer Science and Technology, Xinjiang University, Ürümqi 830046, China
Abstract
Lane detection is a crucial visual perception task in autonomous driving and one of the core modules of advanced driver assistance systems (ADASs). To address the insufficient real-time performance of current segmentation-based models, and the conflict between the demand for high inference speed and the excessive parameter counts of such models on resource-constrained edge devices (such as onboard hardware and mobile terminals) in complex real-world scenarios, this paper proposes an efficient and lightweight auxiliary branch network (CBGA-Auxiliary). First, to strengthen the model’s ability to extract feature information in complex scenarios, a row-anchor-based feature extraction method built on global features is adopted. Second, using ResNet as the backbone network and CBGA (Conv-BN-GELU-SE Attention) as the fundamental module, an auxiliary segmentation network is formed, significantly accelerating the model’s segmentation training. In addition, the standard convolutions in the branch network are replaced with lightweight GhostConv convolutions, reducing parameters and computational complexity while maintaining accuracy. Finally, an additional enhanced structural loss function is introduced to compensate for the structural loss inherent in the row-anchor-based method, further improving detection accuracy. The model was evaluated extensively on the TuSimple and CULane datasets, which cover a variety of road scenarios, and achieved peak F1 scores of 96.1% and 71.0%, respectively. At a resolution of 288 × 800, the ResNet18 and ResNet34 variants reached maximum inference speeds of 410 FPS and 280 FPS, respectively, a significant speed advantage over existing SOTA models. The model strikes a good balance between accuracy and inference speed, making it suitable for deployment on edge devices and validating its effectiveness.
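For readers who want a concrete picture of the building blocks named in the abstract, the PyTorch sketch below shows a minimal CBGA module (Conv-BN-GELU followed by squeeze-and-excitation attention) and a Ghost convolution. It is an illustration only: the channel sizes, kernel sizes, and SE reduction ratio are assumptions, as the abstract does not specify the paper's exact configuration.

```python
# Minimal sketch of the CBGA and GhostConv blocks described in the abstract.
# All hyperparameters below (kernel sizes, reduction ratio, channel split)
# are illustrative assumptions, not values taken from the paper.
import torch
import torch.nn as nn


class SEAttention(nn.Module):
    """Squeeze-and-excitation channel attention (Hu et al., 2018)."""

    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)  # squeeze: global average over H x W
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),  # per-channel gates in (0, 1)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(self.pool(x).view(b, c)).view(b, c, 1, 1)
        return x * w  # excite: rescale each channel


class CBGA(nn.Module):
    """Conv -> BatchNorm -> GELU -> SE attention, in the order the name
    CBGA (Conv-BN-GELU-SE Attention) implies."""

    def __init__(self, in_ch: int, out_ch: int, k: int = 3):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, k, padding=k // 2, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.GELU()
        self.se = SEAttention(out_ch)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.se(self.act(self.bn(self.conv(x))))


class GhostConv(nn.Module):
    """Ghost convolution (Han et al., GhostNet): a regular conv produces half
    the output channels, and a cheap depthwise conv derives the rest from
    them. Assumes an even number of output channels."""

    def __init__(self, in_ch: int, out_ch: int, k: int = 1):
        super().__init__()
        primary = out_ch // 2
        self.primary = nn.Conv2d(in_ch, primary, k, padding=k // 2, bias=False)
        self.cheap = nn.Conv2d(primary, out_ch - primary, 3, padding=1,
                               groups=primary, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        y = self.primary(x)
        return torch.cat([y, self.cheap(y)], dim=1)


if __name__ == "__main__":
    x = torch.randn(1, 64, 36, 100)
    print(CBGA(64, 128)(x).shape)       # torch.Size([1, 128, 36, 100])
    print(GhostConv(64, 128)(x).shape)  # torch.Size([1, 128, 36, 100])
```

The parameter saving of GhostConv comes from the fact that only half of the output channels are produced by a full convolution over all input channels; the other half are generated by a cheap per-channel 3 × 3 operation, which is the mechanism the abstract credits for the reduced parameter count and computational complexity.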