Lane Line Type Recognition Based on Improved YOLOv5
Published: 2023-09-21
Journal: Applied Sciences
Volume: 13, Issue: 18, Page: 10537
ISSN: 2076-3417
Language: en
Authors:
Liu Boyu 1, Wang Hao 1, Wang Yongqiang 1, Zhou Congling 1, Cai Lei 1
Affiliation:
1. School of Mechanical Engineering, Tianjin University of Science and Technology, Tianjin 300222, China
Abstract
The recognition of lane line types plays an important role in the perception of advanced driver assistance systems (ADAS). In real-world driving, the variety of lane line types and the complexity of road conditions present significant challenges to ADAS. To address this problem, this paper proposes an improved YOLOv5 method for recognising lane line types. The method identifies lane line types accurately and quickly and maintains good recognition performance in harsh environments. Its main strategy comprises the following steps: first, the lightweight FasterNet network is introduced into all of the concentrated-comprehensive convolution (C3) modules in the network to accelerate inference and reduce the number of parameters. Then, the efficient channel attention (ECA) mechanism is integrated into the backbone network to extract image feature information and improve the model’s detection accuracy. Finally, the SIoU loss function replaces the original generalised intersection over union (GIoU) loss function to further enhance the robustness of the model. In experiments, the improved YOLOv5s algorithm achieves an mAP@0.5 of 95.1% and a frame rate of 95.2 frame·s−1, satisfying the accuracy and real-time requirements of ADAS. The model has only 6 M parameters and occupies only 11.7 MB, so it can easily be embedded into ADAS without requiring substantial computing power. The improvements also increase the accuracy and speed of the YOLOv5m, YOLOv5l, and YOLOv5x models to varying degrees, so an appropriate model can be selected according to the application. This plays a practical role in improving the safety of ADAS.
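The ECA step described in the abstract can be sketched as follows. This is a minimal NumPy illustration of the efficient channel attention idea (global average pooling per channel, a 1D convolution across neighbouring channels, and a sigmoid gate that rescales the input), not the paper's implementation: the fixed averaging kernel and the `k_size=3` default are assumptions for demonstration, whereas in a real network the 1D convolution weights are learned and the kernel size is derived adaptively from the channel count.

```python
import numpy as np

def eca(feature_map, k_size=3):
    """Efficient channel attention (ECA), minimal NumPy sketch.

    feature_map: array of shape (C, H, W).
    k_size: 1D convolution kernel size over channels (illustrative
            default; ECA derives it adaptively from C in practice).
    """
    # 1. Global average pooling: one descriptor per channel.
    pooled = feature_map.mean(axis=(1, 2))            # shape (C,)

    # 2. 1D convolution across neighbouring channels. The weights are
    #    fixed to a simple average here for illustration; they are
    #    learned parameters in the actual module.
    kernel = np.full(k_size, 1.0 / k_size)
    padded = np.pad(pooled, k_size // 2, mode="edge")
    conv = np.convolve(padded, kernel, mode="valid")  # shape (C,)

    # 3. Sigmoid gate in (0, 1), broadcast back over H and W.
    gate = 1.0 / (1.0 + np.exp(-conv))
    return feature_map * gate[:, None, None]
```

Because the gate is computed from a cheap 1D convolution over channel descriptors rather than a fully connected bottleneck, the module adds very few parameters, which is consistent with the paper's goal of keeping the model small enough to embed in ADAS hardware.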
Funder
Beijing Smarter Eye Technology Co., Ltd.
Subject
Fluid Flow and Transfer Processes, Computer Science Applications, Process Chemistry and Technology, General Engineering, Instrumentation, General Materials Science
Cited by: 1 article.