Affiliation:
1. The School of Information Technology, Luoyang Normal University, Luoyang, China
2. The Faculty of Informatics and Management, Center for Basic and Applied Research, University of Hradec Kralove, Hradec Kralove, Czech Republic
3. The School of Computer Science, Sichuan University, Chengdu, China
Abstract
Lane lines are frequently interrupted in autonomous driving environments by objective conditions such as occlusion or congestion, which often degrade a model's detection performance. Current detection methods that rely only on spatial information struggle to detect complete lane lines under such conditions. In this paper, we build a robust lane detection model by fusing spatiotemporal information with dilated convolution. Dilated convolution expands the receptive field of the convolutional operations, allowing the model to extract more lane feature information across various perception environments. Convolutional gated recurrent units (ConvGRUs) are employed at the high-level semantic stage to help the model capture more effective lane features by processing the spatiotemporal information of consecutive frames. In extensive experiments on three well-known lane detection benchmarks, the proposed model is compared with FCN, DeepLabv3, RefineNet, SCNN, Cheng-DET, LDNet, SegNet, SegNet-Ego-Lane, Res18, Res34, ResNet-18-SAD, ResNet-34-SAD, ENet-SAD, ResNet-101, R-18-E2E, R-34-E2E, R-101-SAD, R-101-E2E, ResNet34-Qin, LaneNet, PINET(64x32), UNet_ConvLSTM, SegNet_ConvLSTM, and LDSTNet; the results confirm the usefulness of the proposed model, which achieves robust and competitive performance.
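The dilated convolution mentioned in the abstract enlarges a layer's receptive field without adding parameters. The following is a minimal illustrative sketch in plain Python (a hypothetical toy implementation, not the authors' code): a 1D dilated convolution and the standard receptive-field formula for a stack of stride-1 dilated layers.

```python
def dilated_conv1d(signal, kernel, dilation=1):
    """Valid-mode 1D convolution whose kernel taps are spaced `dilation` apart."""
    k = len(kernel)
    span = dilation * (k - 1) + 1  # effective extent of the dilated kernel
    return [
        sum(kernel[j] * signal[i + j * dilation] for j in range(k))
        for i in range(len(signal) - span + 1)
    ]

def receptive_field(kernel_size, dilations):
    """Receptive field after stacking stride-1 conv layers with the given dilation rates."""
    rf = 1
    for d in dilations:
        rf += d * (kernel_size - 1)
    return rf
```

For example, three stacked 3-tap layers with dilation rates 1, 2, 4 cover `receptive_field(3, [1, 2, 4]) == 15` input positions, whereas the same stack without dilation covers only 7; this widening is what lets the model aggregate context across gaps in occluded lane lines.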
Funder
National Natural Science Foundation of China
National Key R&D Program of China
The Science and Technology Project of Sichuan Province
The Key Scientific Research Projects in Higher Education Institutions in Henan Province
The Science and Technology Program of Sichuan Province