Abstract
Semantic segmentation of very-high-resolution (VHR) remote sensing images plays an important role in the intelligent interpretation of remote sensing, since it assigns pixel-level labels to images. Although many semantic segmentation methods for VHR remote sensing images have emerged recently and achieved good results, the task remains challenging because objects in VHR remote sensing images exhibit large intra-class and small inter-class variations, and their sizes vary over a wide range. Therefore, we propose a novel semantic segmentation framework for VHR remote sensing images, called the Positioning Guidance Network (PGNet), which consists of a feature extractor, a positioning guidance module (PGM), and a self-multiscale collection module (SMCM). First, the PGM extracts long-range dependencies and global context information with the help of the transformer architecture and transfers them to each pyramid-level feature, effectively improving segmentation between different semantic objects. Second, the SMCM extracts multi-scale information and generates high-resolution feature maps with high-level semantic information, which helps segment objects of small and varying sizes. Without bells and whistles, the mIoU scores of the proposed PGNet on the iSAID dataset and the ISPRS Vaihingen dataset are 1.49% and 2.40% higher than those of FactSeg, respectively.
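The abstract only outlines the architecture at a high level. The following is a minimal PyTorch-style sketch of that layout for orientation: everything beyond the names PGNet, PGM, and SMCM (the stand-in backbone, channel widths, the dilated-convolution reading of "multi-scale collection", and the guidance-fusion scheme) is a hypothetical reconstruction from the abstract's wording, not the authors' implementation.

```python
# Hypothetical sketch of the PGNet layout described in the abstract.
# Module names PGM and SMCM come from the paper; every signature,
# channel size, and internal detail here is an assumption.
import torch
import torch.nn as nn
import torch.nn.functional as F

class PGM(nn.Module):
    """Positioning guidance module: a transformer encoder over the deepest
    feature map, capturing long-range dependencies and global context that
    are later broadcast to each pyramid level."""
    def __init__(self, dim=256, heads=8, layers=2):
        super().__init__()
        enc = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc, num_layers=layers)

    def forward(self, x):                        # x: (B, C, H, W)
        b, c, h, w = x.shape
        tokens = x.flatten(2).transpose(1, 2)    # (B, H*W, C) token sequence
        tokens = self.encoder(tokens)
        return tokens.transpose(1, 2).reshape(b, c, h, w)

class SMCM(nn.Module):
    """Self-multiscale collection module: parallel dilated convolutions
    (one plausible reading of 'multi-scale collection'), fused and
    upsampled into a high-resolution, semantically rich map."""
    def __init__(self, dim=256, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(dim, dim, 3, padding=r, dilation=r) for r in rates)
        self.fuse = nn.Conv2d(dim * len(rates), dim, 1)

    def forward(self, x, out_size):
        x = self.fuse(torch.cat([b(x) for b in self.branches], dim=1))
        return F.interpolate(x, size=out_size, mode="bilinear", align_corners=False)

class PGNet(nn.Module):
    """Feature extractor -> PGM guidance added to each pyramid level
    -> SMCM -> per-pixel classifier."""
    def __init__(self, num_classes, dim=256):
        super().__init__()
        # Stand-in backbone producing a 4-level pyramid at 1/4 .. 1/32 scale.
        self.backbone = nn.ModuleList(
            nn.Conv2d(3 if i == 0 else dim, dim, 3,
                      stride=4 if i == 0 else 2, padding=1)
            for i in range(4))
        self.pgm = PGM(dim)
        self.smcm = SMCM(dim)
        self.classifier = nn.Conv2d(dim, num_classes, 1)

    def forward(self, img):
        feats, x = [], img
        for stage in self.backbone:
            x = stage(x)
            feats.append(x)
        guide = self.pgm(feats[-1])              # global context from deepest level
        fused = 0
        for f in feats:                          # transfer guidance to every level
            g = F.interpolate(guide, size=f.shape[-2:],
                              mode="bilinear", align_corners=False)
            fused = fused + F.interpolate(f + g, size=feats[0].shape[-2:],
                                          mode="bilinear", align_corners=False)
        return self.classifier(self.smcm(fused, out_size=img.shape[-2:]))
```

Under these assumptions, `PGNet(num_classes=16)(torch.randn(1, 3, 512, 512))` returns a (1, 16, 512, 512) logit map, 16 being the iSAID class count including background.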
Funder
National Key Research and Development Project
National Natural Science Foundation of China
Subject
General Earth and Planetary Sciences
Cited by
11 articles.