A Fast and Accurate Lane Detection Method Based on Row Anchor and Transformer Structure
Author:
Chai Yuxuan 1, Wang Shixian 1,2, Zhang Zhijia 1,2
Affiliation:
1. School of Artificial Intelligence, Shenyang University of Technology, Shenyang 110870, China
2. Shenyang Key Laboratory of Information Perception and Edge Computing, Shenyang 110870, China
Abstract
Lane detection plays a pivotal role in the successful implementation of Advanced Driver Assistance Systems (ADASs), which must detect the road's lane markings and determine the vehicle's position, thereby influencing subsequent decision-making. However, current deep learning-based lane detection methods face two challenges. First, on-board hardware limitations demand an exceptionally fast prediction speed from the lane detection method. Second, detection accuracy in complex scenarios still needs improvement. This paper addresses these issues by enhancing the row-anchor-based lane detection method. A Transformer encoder–decoder structure is leveraged as the row classifier, strengthening the model's ability to extract global features and to detect lane lines in complex environments. The Feature-aligned Pyramid Network (FaPN) structure serves as an auxiliary branch, complemented by a novel structural loss with an expectation loss, further refining the method's accuracy. The experimental results demonstrate our method's strong accuracy and real-time performance, achieving a rapid prediction speed of 129 FPS (a single prediction on an RTX 3080 takes 15.72 ms, including post-processing) and 96.16% accuracy on the Tusimple dataset, a 3.32% improvement over the baseline method.
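The row-anchor formulation described above casts lane detection as a per-row classification problem: for each predefined row anchor, the network scores a grid of column cells, and an expectation (soft-argmax) over those scores yields a continuous, differentiable lane position that the expectation loss can supervise. A minimal sketch of that expectation step, assuming simple NumPy arrays (the function name and shapes are illustrative, not taken from the paper):

```python
import numpy as np

def row_anchor_expectation(logits):
    """Convert per-row classification logits into continuous lane
    x-positions via the expectation (soft-argmax) over column cells.

    logits: (num_rows, num_cells) array of scores, one row per row anchor.
    Returns: (num_rows,) array of expected cell indices (sub-cell accurate).
    """
    # numerically stable softmax over the cell dimension of each row anchor
    z = logits - logits.max(axis=1, keepdims=True)
    probs = np.exp(z)
    probs /= probs.sum(axis=1, keepdims=True)
    cells = np.arange(logits.shape[1])
    # expected location = sum_i i * p_i — differentiable, unlike argmax
    return probs @ cells
```

Because the expectation is differentiable, a loss on this value can pull probability mass toward the true column even when the hard argmax is already correct, which is the usual motivation for pairing a classification loss with an expectation-based location loss.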
Funder
Applied Basic Research Program of Liaoning Province, China (2023)
References: 33 articles.
Cited by
1 article.