Combining Images and Trajectories Data to Automatically Generate Road Networks
Published: 2023-06-30
Issue: 13
Volume: 15
Page: 3343
ISSN: 2072-4292
Container-title: Remote Sensing
Language: en
Short-container-title: Remote Sensing
Author:
Bai Xiangdong 1, Feng Xuyu 1, Yin Yuanyuan 1, Yang Mingchun 2, Wang Xingyao 1, Yang Xue 1,3
Affiliation:
1. School of Geography and Information Engineering, China University of Geosciences, Wuhan 430074, China
2. Department of Automotive Engineering, Anhui Automobile Vocational and Technical College, Hefei 230061, China
3. National Engineering Research Center of Geographic Information System, China University of Geosciences, Wuhan 430074, China
Abstract
Road network data are an important part of many applications, e.g., intelligent transportation and urban planning. At present, most approaches to road network generation rely on a single data source, such as images, point clouds, or trajectories, which may cause fragmentation of information. This study proposes a novel strategy, named RNITP, to obtain vector road network data by combining images and trajectory data with a postprocessing method. The designed RNITP includes two parts: an initial road network detection layer and a postprocessing layer for vector map acquisition. The first layer comprises three steps of road network detection: road information interpretation from images based on a new deep learning model (denoted as SPBAM-LinkNet), road detection from trajectory data by rasterization, and road information fusion using an OR operation. The last layer generates a vector map through a postprocessing method focused on error identification and removal. Experiments were conducted on two datasets: the CHN6-CUG road dataset and the HB road dataset. The results show that the accuracy, F1 score, and MIoU of SPBAM-LinkNet on CHN6-CUG and HB were (0.9695, 0.7369, 0.7760) and (0.9387, 0.7257, 0.7514), respectively, which are better than those of other typical models (e.g., U-Net, DeepLabv3+, D-LinkNet, NL-LinkNet). In addition, the F1 score, IoU, and recall of the vector map obtained from RNITP are 0.8883, 0.7991, and 0.9065, respectively.
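The trajectory rasterization and OR-fusion step of the first layer can be illustrated with a minimal sketch. The code below is not from the paper: it assumes NumPy, a simple linear mapping of GPS points to grid cells, and hypothetical helper names (rasterize_trajectories, fuse_road_masks) chosen for illustration only.

import numpy as np

def rasterize_trajectories(points, bounds, grid_shape):
    # Rasterize GPS trajectory samples into a binary road-evidence grid.
    # points: (N, 2) array of (lon, lat) samples
    # bounds: (min_lon, min_lat, max_lon, max_lat) of the study area
    # grid_shape: (rows, cols) matching the image segmentation mask
    min_lon, min_lat, max_lon, max_lat = bounds
    rows, cols = grid_shape
    grid = np.zeros(grid_shape, dtype=bool)
    # Map each point to a grid cell; the row index grows southward.
    col = ((points[:, 0] - min_lon) / (max_lon - min_lon) * (cols - 1)).astype(int)
    row = ((max_lat - points[:, 1]) / (max_lat - min_lat) * (rows - 1)).astype(int)
    valid = (row >= 0) & (row < rows) & (col >= 0) & (col < cols)
    grid[row[valid], col[valid]] = True
    return grid

def fuse_road_masks(image_mask, trajectory_mask):
    # OR-fuse the image-derived road mask with the trajectory raster.
    return np.logical_or(image_mask, trajectory_mask)

# Example usage with a hypothetical probability map from the segmentation model:
# traj_mask = rasterize_trajectories(gps_points, bounds, image_prob.shape)
# fused = fuse_road_masks(image_prob > 0.5, traj_mask)

In the paper, the image mask comes from SPBAM-LinkNet and the fused raster is then vectorized and cleaned in the postprocessing layer; the 0.5 threshold and the point-to-cell mapping above are illustrative assumptions, not the authors' exact procedure.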
Funder
National Natural Science Foundation of China; College Student Innovative Practice Project, China University of Geosciences
Subject
General Earth and Planetary Sciences
Cited by: 2 articles