Method for Landslide Area Detection Based on EfficientNetV2 with Optical Image Converted from SAR Image Using pix2pixHD with Spatial Attention Mechanism in Loss Function
Published: 2024-08-28
Issue: 9
Volume: 15
Page: 524
ISSN: 2078-2489
Container-title: Information
Language: en
Short-container-title: Information
Author:
Arai Kohei 1, Nakaoka Yushin 2, Okumura Hiroshi 1
Affiliation:
1. Information Science Department, Science and Engineering Faculty, Saga University, Saga 840-8502, Japan
2. Graduate School of Science and Engineering, Saga University, Saga 840-8502, Japan
Abstract
A method for landslide area detection is proposed that applies EfficientNetV2 to optical images converted from SAR images using pix2pixHD with a spatial attention mechanism in the loss function. Landslides triggered by meteorological events such as heavy rain occur regardless of the time of day or the weather. Such landslides are easier to judge visually in optical images than in SAR images, but optical images cannot be acquired at night or under rain or cloud cover. We therefore devised a method that converts SAR images, which permit all-weather observation day and night, into optical images using pix2pixHD, and trains a landslide-area classifier on the converted optical images. Using SAR and optical images from Sentinel-1 and Sentinel-2 that captured landslides caused by the earthquake of 14 April 2016 as training data, we constructed a model that classifies landslide areas using EfficientNetV2. We evaluated the superiority of the proposed method by comparing it with a model trained on SAR images alone. With SAR images alone, the F1-score and AUC were 0.3396 and 0.2697, respectively; with the proposed method they improved by factors of 1.84 and 1.52, to 0.6250 and 0.4109, respectively.
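The abstract does not spell out how the spatial attention mechanism enters the pix2pixHD loss, so the following is only a minimal PyTorch sketch of one plausible realization: an attention map re-weights the per-pixel L1 reconstruction term that is added to the usual pix2pixHD adversarial and feature-matching losses. The gradient-magnitude attention map, the function names `spatial_attention_map` and `attention_weighted_l1`, and the `1 + alpha * attn` weighting are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn.functional as F

def spatial_attention_map(target: torch.Tensor) -> torch.Tensor:
    """Hypothetical attention map: normalized local gradient magnitude of
    the ground-truth optical image (N, C, H, W) in [0, 1].
    Returns an (N, 1, H, W) map in [0, 1]."""
    gray = target.mean(dim=1, keepdim=True)          # collapse channels to one
    kx = torch.tensor([[-1., 0., 1.],
                       [-2., 0., 2.],
                       [-1., 0., 1.]], device=target.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)                          # vertical Sobel filter
    gx = F.conv2d(gray, kx, padding=1)
    gy = F.conv2d(gray, ky, padding=1)
    mag = torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)
    flat = mag.flatten(1)
    lo = flat.min(dim=1)[0].view(-1, 1, 1, 1)
    hi = flat.max(dim=1)[0].view(-1, 1, 1, 1)
    return (mag - lo) / (hi - lo + 1e-8)             # per-image min-max scaling

def attention_weighted_l1(fake: torch.Tensor, real: torch.Tensor,
                          alpha: float = 1.0) -> torch.Tensor:
    """L1 reconstruction loss re-weighted by the spatial attention map;
    alpha controls how strongly attended pixels dominate."""
    attn = spatial_attention_map(real).detach()      # no gradient through the map
    weight = 1.0 + alpha * attn                      # keep a base weight of 1 everywhere
    return (weight * (fake - real).abs()).mean()
```

In training, a term like this would simply be added to the pix2pixHD generator objective alongside the adversarial and feature-matching losses, biasing the generator toward structurally busy regions such as landslide scars.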
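For the classification stage, below is a minimal sketch of fine-tuning EfficientNetV2 on the translated optical patches as a binary landslide / non-landslide classifier. The abstract does not state which EfficientNetV2 variant or training configuration the authors used; the EfficientNetV2-S backbone from torchvision, the single-logit head with `BCEWithLogitsLoss`, and the optimizer settings are assumptions for illustration.

```python
import torch
import torch.nn as nn
from torchvision import models

# EfficientNetV2-S with ImageNet weights; the 1000-class head is replaced
# by a single logit for landslide vs. non-landslide classification.
model = models.efficientnet_v2_s(
    weights=models.EfficientNet_V2_S_Weights.IMAGENET1K_V1)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 1)

criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
    """One optimization step on (N, 3, H, W) translated optical patches
    and (N,) binary landslide labels."""
    model.train()
    optimizer.zero_grad()
    logits = model(images).squeeze(1)
    loss = criterion(logits, labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()
```

The F1-score and AUC figures reported in the abstract would then come from evaluating such a model on held-out patches, against a baseline trained directly on the SAR imagery.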