Global–Local Information Fusion Network for Road Extraction: Bridging the Gap in Accurate Road Segmentation in China
Published: 2023-09-25
Volume: 15
Issue: 19
Page: 4686
ISSN: 2072-4292
Journal: Remote Sensing
Language: en
Author:
Wang Xudong 1, Cai Yujie 1, He Kang 2, Wang Sheng 1,2, Liu Yan 3, Dong Yusen 1,2,4
Affiliation:
1. School of Computer Science, China University of Geosciences, Wuhan 430078, China
2. Hubei Key Laboratory of Geological Survey and Evaluation of Ministry of Education, China University of Geosciences, Wuhan 430078, China
3. State Key Laboratory of Geological Processes and Mineral Resources, China University of Geosciences, Wuhan 430078, China
4. Key Laboratory of Intelligent Geo-Information Processing, China University of Geosciences (Wuhan), Wuhan 430078, China
Abstract
Road extraction is crucial in urban planning, rescue operations, and military applications. Compared to traditional methods, deep learning for road extraction from remote sensing images has demonstrated unique advantages. However, previous convolutional neural network (CNN)-based road extraction methods have limited receptive fields and fail to effectively capture long-range road features. Transformer-based methods, on the other hand, capture global information well but struggle to extract road edge information. Additionally, existing high-performing road extraction methods lack validation on Chinese regions. To address these issues, this paper proposes a novel road extraction model called the global–local information fusion network (GLNet). In this model, the global information extraction (GIE) module effectively integrates global contextual relationships, the local information extraction (LIE) module accurately captures road edge information, and the information fusion (IF) module combines the output features from both branches to generate the final extraction results. A series of experiments on two geographically diverse Chinese road datasets demonstrates that the model outperforms state-of-the-art deep learning models for road extraction in China. On the CHN6-CUG dataset, the overall accuracy (OA) and intersection over union (IoU) reach 97.49% and 63.27%, respectively, while on the RDCME dataset, OA and IoU reach 98.73% and 84.97%, respectively. These results hold significant implications for road traffic, humanitarian rescue, and environmental monitoring, particularly in the Chinese region.
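As an aside, the two metrics reported in the abstract, overall accuracy (OA) and intersection over union (IoU), can be computed directly from pixel-level confusion counts for binary road segmentation. A minimal sketch follows; the function and variable names (`tp`, `fp`, `fn`, `tn`) are illustrative and not taken from the paper.

```python
# Compute OA and IoU from pixel-level confusion counts for a binary
# (road vs. background) segmentation map.

def overall_accuracy(tp: int, tn: int, fp: int, fn: int) -> float:
    """OA = correctly classified pixels / all pixels."""
    return (tp + tn) / (tp + tn + fp + fn)

def iou(tp: int, fp: int, fn: int) -> float:
    """IoU for the road class = TP / (TP + FP + FN)."""
    return tp / (tp + fp + fn)

if __name__ == "__main__":
    # Toy pixel counts for a single image tile.
    tp, fp, fn, tn = 800, 100, 150, 8950
    print(f"OA  = {overall_accuracy(tp, tn, fp, fn):.4f}")  # 0.9750
    print(f"IoU = {iou(tp, fp, fn):.4f}")                   # 0.7619
```

Note that on strongly imbalanced scenes (roads cover few pixels), OA can stay high even when IoU is modest, which is consistent with the gap between the reported OA and IoU values.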
Funder
Geological Survey of China; National Natural Science Foundation of China; Opening Fund of the Key Laboratory of Geological Survey and Evaluation of the Ministry of Education
Subject
General Earth and Planetary Sciences
References (71 articles)
1. Wei et al. Simultaneous road surface and centerline extraction from large-scale remote sensing images using CNN-based segmentation and tracing. IEEE Trans. Geosci. Remote Sens., 2020.
2. Yang, F., Wang, H., and Jin, Z. A fusion network for road detection via spatial propagation and spatial transformation. Pattern Recognit., 100, 2020.
3. Claussmann et al. A review of motion planning for highway autonomous driving. IEEE Trans. Intell. Transp. Syst., 2019.
4. Bonafilia, D., Gill, J., Basu, S., and Yang, D. Building high resolution maps for humanitarian aid and development with weakly- and semi-supervised learning. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, Long Beach, CA, USA, 16–17 June 2019.
5. He, K., Dong, Y., Han, W., and Zhang, Z. An assessment on the off-road trafficability using a quantitative rule method with geographical and geological data. Comput. Geosci., 177, 2023.