Combining Low-Light Scene Enhancement for Fast and Accurate Lane Detection
Author:
Ke Changshuo1, Xu Zhijie1, Zhang Jianqin2, Zhang Dongmei1
Affiliation:
1. School of Science, Beijing University of Civil Engineering and Architecture, Beijing 102616, China
2. School of Geomatics and Urban Spatial Informatics, Beijing University of Civil Engineering and Architecture, Beijing 102616, China
Abstract
Lane detection is a crucial task in autonomous driving, as it enables vehicles to navigate the road safely by interpreting the high-level semantics of traffic signs. Unfortunately, lane detection remains a challenging problem due to factors such as low-light conditions, occlusion, and blurred lane markings. These factors increase the ambiguity and uncertainty of lane features, making them hard to distinguish and segment. To tackle these challenges, we propose low-light enhancement fast lane detection (LLFLD), a method that integrates an automatic low-light scene enhancement network (ALLE) with a lane detection network to improve detection performance under low-light conditions. Specifically, we first use ALLE to enhance the brightness and contrast of the input image while reducing excessive noise and color distortion. We then introduce a symmetric feature flipping module (SFFM) and a channel fusion self-attention mechanism (CFSAT), which refine low-level features and exploit richer global contextual information, respectively. Moreover, we devise a novel structural loss function that leverages the inherent geometric priors of lanes to optimize the detection results. We evaluate our method on CULane, a public benchmark for lane detection under various lighting conditions. Our experiments show that our approach surpasses other state-of-the-art methods in both daytime and nighttime settings, especially in low-light scenarios.
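The abstract does not give implementation details for SFFM or CFSAT. As a rough, speculative illustration of the flavor of these two ideas, the NumPy sketch below mirrors a feature map along the width axis and fuses it with the original (lanes in road scenes are roughly left-right symmetric), then reweights channels with a softmax over their global means. The function names, fusion rule, and attention form are our assumptions for illustration, not the authors' implementation:

```python
import numpy as np

def symmetric_feature_flip(feat):
    """Fuse a feature map of shape (C, H, W) with its horizontal mirror.

    Averaging with the width-reversed map is one plausible way to
    reinforce weak lane responses on either side of the image.
    """
    flipped = feat[:, :, ::-1]       # mirror along the width axis
    return 0.5 * (feat + flipped)    # simple additive fusion

def channel_attention(feat):
    """Reweight channels by a numerically stable softmax over their global means."""
    means = feat.mean(axis=(1, 2))   # one scalar per channel, shape (C,)
    w = np.exp(means - means.max())
    w = w / w.sum()                  # softmax weights over channels
    return feat * w[:, None, None]   # broadcast weights over H and W

feat = np.random.rand(8, 4, 6)
out = channel_attention(symmetric_feature_flip(feat))
print(out.shape)  # (8, 4, 6)
```

Note that the fused map is exactly left-right symmetric by construction; a real module would more likely learn how much mirrored evidence to blend in rather than average unconditionally.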
Funder
Beijing Natural Science Foundation; Beijing University of Civil Engineering and Architecture Graduate Student Innovation Project
Subject
Electrical and Electronic Engineering; Biochemistry; Instrumentation; Atomic and Molecular Physics, and Optics; Analytical Chemistry
Cited by
1 article.