Author:
Zhang Hua, Jiang Zhengang, Zheng Guoxun, Yao Xuekun
Abstract
Semantic segmentation of high-resolution remote sensing images has become a research focus in the remote sensing field, as it can accurately identify ground objects and determine their locations. Traditional deep-learning-based semantic segmentation, however, requires a large amount of annotated data, which makes it unsuitable for high-resolution remote sensing tasks with limited resources. It is therefore important to build a semantic segmentation method suited to high-resolution remote sensing images. This paper proposes an improved U-Net model based on transfer learning to solve the semantic segmentation problem of high-resolution remote sensing images. The model retains the symmetric encoder–decoder structure of U-Net. In the encoder, transfer learning is applied and VGG16 is used as the backbone of the feature extraction network; in the decoder, each feature map is upsampled by bilinear interpolation and then fused at multiple scales with the feature map of the corresponding encoder layer, finally yielding a prediction for each pixel to achieve precise localization. To verify the efficacy of the proposed network, experiments are performed on the ISPRS Vaihingen dataset. The results show that the method achieves high-quality semantic segmentation on this high-resolution remote sensing dataset: compared with the traditional U-Net, the MIoU is 1.70%, 2.20%, and 2.33% higher on the training, validation, and test sets, respectively, and the IoU for the car category is 4.26%, 6.89%, and 5.44% higher.
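To make the described architecture concrete, below is a minimal PyTorch sketch, not the authors' implementation, of the design the abstract outlines: a VGG16 encoder initialized with pretrained weights for transfer learning, and a decoder that upsamples with bilinear interpolation and fuses each upsampled map with the corresponding encoder feature map before a 1x1 convolution produces per-pixel class predictions. The stage boundaries, channel widths, decoder convolutions, and class count are illustrative assumptions.

# Minimal sketch (assumed details, not the paper's exact model):
# VGG16 encoder via transfer learning + U-Net-style decoder with
# bilinear upsampling and multiscale fusion of encoder feature maps.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import vgg16

class VGG16UNet(nn.Module):
    def __init__(self, num_classes=6):  # 6 classes assumed for ISPRS Vaihingen
        super().__init__()
        backbone = vgg16(weights="DEFAULT").features  # ImageNet-pretrained
        # Slice VGG16 into five stages; each later stage begins with a
        # max-pool, so outputs shrink spatially while channels grow.
        self.stage1 = backbone[:4]     # -> 64 ch,  full resolution
        self.stage2 = backbone[4:9]    # -> 128 ch, 1/2
        self.stage3 = backbone[9:16]   # -> 256 ch, 1/4
        self.stage4 = backbone[16:23]  # -> 512 ch, 1/8
        self.stage5 = backbone[23:30]  # -> 512 ch, 1/16

        def fuse_block(in_ch, out_ch):
            # Two 3x3 convs applied after concatenating the upsampled
            # decoder map with the matching encoder feature map.
            return nn.Sequential(
                nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
                nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.ReLU(inplace=True),
            )

        self.dec4 = fuse_block(512 + 512, 512)
        self.dec3 = fuse_block(512 + 256, 256)
        self.dec2 = fuse_block(256 + 128, 128)
        self.dec1 = fuse_block(128 + 64, 64)
        self.head = nn.Conv2d(64, num_classes, 1)  # per-pixel prediction

    @staticmethod
    def _up(x):
        # Bilinear interpolation doubles the spatial resolution.
        return F.interpolate(x, scale_factor=2, mode="bilinear",
                             align_corners=False)

    def forward(self, x):
        e1 = self.stage1(x)
        e2 = self.stage2(e1)
        e3 = self.stage3(e2)
        e4 = self.stage4(e3)
        e5 = self.stage5(e4)
        d4 = self.dec4(torch.cat([self._up(e5), e4], dim=1))
        d3 = self.dec3(torch.cat([self._up(d4), e3], dim=1))
        d2 = self.dec2(torch.cat([self._up(d3), e2], dim=1))
        d1 = self.dec1(torch.cat([self._up(d2), e1], dim=1))
        return self.head(d1)  # (N, num_classes, H, W) logits

# Usage: logits = VGG16UNet()(torch.randn(1, 3, 256, 256))  # -> (1, 6, 256, 256)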
Funder
National Natural Science Foundation of China
Foundation of Jilin Provincial Science and Technology Department
Jilin Province Innovation and Entrepreneurship Talent Funding Project
CCIT Science and Technology Project
Publisher
Springer Science and Business Media LLC
Subject
Computational Mathematics, General Computer Science