Authors:
Favorskaya Margarita, Pakhirka Andrey
Abstract
Technologies for remote sensing image processing are currently developing rapidly, covering both satellite images and aerial images obtained from video cameras of unmanned aerial vehicles. Such images often suffer from artifacts such as low resolution, blurred fragments, and noise. One way to overcome these limitations is to apply super-resolution reconstruction based on deep learning methods. A specific feature of aerial images is that texture and structural elements are presented at a higher resolution than in satellite images, which objectively contributes to better reconstruction results. The article provides a classification of super-resolution methods based on the main deep neural network architectures, namely convolutional neural networks, visual transformers, and generative adversarial networks. The article proposes SemESRGAN, a method for reconstructing super-resolution aerial images that takes semantic features into account by using an additional deep network for semantic segmentation during the training stage. A total loss function is minimized that includes adversarial losses, pixel-level losses, and perceptual losses (feature similarity). Six annotated aerial and satellite image datasets (CLCD, DOTA, LEVIR-CD, UAVid, AAD, and AID) were used for the experiments. The results of image reconstruction with the proposed SemESRGAN method were compared with baseline architectures of convolutional neural networks, visual transformers, and generative adversarial networks. Comparative results were obtained using the objective metrics PSNR and SSIM, which made it possible to evaluate the reconstruction quality of various deep network models.
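The total loss mentioned in the abstract (a weighted combination of adversarial, pixel-level, and perceptual terms) and the PSNR evaluation metric can be sketched in NumPy. This is a minimal illustration: the weighting coefficients and function names below are assumptions for clarity, not the values or code from the article.

```python
import numpy as np

def pixel_loss(sr, hr):
    # L1 distance between the super-resolved (SR) and ground-truth (HR) images
    return np.mean(np.abs(sr - hr))

def perceptual_loss(feat_sr, feat_hr):
    # MSE between deep-feature representations of SR and HR images
    # (feature similarity, as in perceptual losses)
    return np.mean((feat_sr - feat_hr) ** 2)

def adversarial_loss(d_sr):
    # Generator-side adversarial term: -log D(SR), where D(SR) are
    # discriminator outputs for the super-resolved images
    return -np.mean(np.log(d_sr + 1e-12))

def total_loss(sr, hr, feat_sr, feat_hr, d_sr,
               w_pix=1.0, w_perc=1.0, w_adv=5e-3):
    # Weighted sum of the three terms; the weights are illustrative only
    return (w_pix * pixel_loss(sr, hr)
            + w_perc * perceptual_loss(feat_sr, feat_hr)
            + w_adv * adversarial_loss(d_sr))

def psnr(sr, hr, max_val=255.0):
    # Peak signal-to-noise ratio in dB for images in [0, max_val]
    mse = np.mean((sr - hr) ** 2)
    return 10.0 * np.log10(max_val ** 2 / mse)
```

In practice, SSIM and the perceptual features would come from an image-quality library and a pretrained network, respectively; the structure above only shows how the terms are combined during training and evaluation.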
References (36 articles)
1. Favorskaya M.N. Analytical study of deep learning models for creating super-resolution remote sensing images // Spatial Data Processing for Monitoring of Natural and Anthropogenic Processes (SDM-2023): Proceedings of the All-Russian conference with international participation. 2023. pp. 17–25. (In Russian).
2. Lepcha D.C., Goyal B., Dogra A., Goyal V. Image super-resolution: A comprehensive review, recent trends, challenges and applications // Information Fusion. 2023. vol. 91. pp. 230–260.
3. Goodfellow I., Pouget-Abadie J., Mirza M., Xu B., Warde-Farley D., Ozair S., Courville A., Bengio Y. Generative adversarial nets // Advances in Neural Information Processing Systems (NIPS 2014). 2014. vol. 27. pp. 1–9.
4. Favorskaya M.N., Pakhirka A.I. Improving the resolution of remote sensing images based on deep generative adversarial networks // Spatial Data Processing for Monitoring of Natural and Anthropogenic Processes (SDM-2023): Proceedings of the All-Russian conference with international participation. 2023. pp. 163–168. (In Russian).
5. Conde M.V., Choi U.J., Burchi M., Timofte R. Swin2SR: SwinV2 transformer for compressed image super-resolution and restoration // Computer Vision – ECCV 2022 Workshops. LNCS. Springer, Cham. 2023. vol. 13802. pp. 669–687.
Cited by: 1 article.