Inpainting radar missing data regions with deep learning

Published: 2021-12-09
Volume: 14, Issue: 12
Pages: 7729-7747
ISSN: 1867-8548
Container title: Atmospheric Measurement Techniques
Short container title: Atmos. Meas. Tech.
Language: en
Authors: Andrew Geiss, Joseph C. Hardin
Abstract
Missing and low-quality data regions are a frequent problem for weather radars. They stem from a variety of sources: beam blockage, instrument failure, near-ground blind zones, and many others. Filling in missing data regions is often useful for estimating local atmospheric properties and for applying high-level data processing schemes (feature detection and tracking, for instance) without the need for preprocessing and error-handling steps. Interpolation schemes are typically used for this task, though they tend to produce unrealistically smooth results that are not representative of the atmospheric turbulence and variability usually resolved by weather radars. Recently, generative adversarial networks (GANs) have achieved impressive results in photo inpainting. Here, they are demonstrated as a tool for infilling radar missing-data regions. These neural networks can extend large-scale cloud and precipitation features that border missing-data regions into those regions while hallucinating plausible small-scale variability; in other words, they can inpaint missing data with accurate large-scale features and plausible local small-scale features. The method is demonstrated on a scanning C-band radar and a vertically pointing Ka-band radar that were deployed as part of the Cloud, Aerosol, and Complex Terrain Interactions (CACTI) field campaign. Three missing-data scenarios are explored: infilling low-level blind zones and short outage periods for the Ka-band radar, and infilling beam-blockage areas for the C-band radar. Two deep-learning approaches are tested: a convolutional neural network (CNN) that optimizes pixel-level error, and a GAN that optimizes a combined pixel-level and adversarial loss. Both deep-learning approaches significantly outperform traditional inpainting schemes under several pixel-level and perceptual quality metrics.
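The abstract contrasts a CNN trained on pixel-level error alone with a GAN trained on a combined pixel-level and adversarial loss. A minimal sketch of such a combined objective is shown below; the function name, the L1 choice of pixel error, the non-saturating form of the adversarial term, and the weight value are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def combined_inpainting_loss(pred, truth, mask, disc_score, adv_weight=0.01):
    """Hypothetical generator loss: pixel-level L1 error over the
    missing-data (inpainted) region plus a weighted adversarial term.

    pred, truth : 2-D arrays of radar data (e.g. reflectivity in dBZ)
    mask        : boolean array, True where data were missing and infilled
    disc_score  : discriminator's probability that `pred` is real, in (0, 1)
    """
    pixel_l1 = np.abs(pred - truth)[mask].mean()  # pixel-level error on infilled pixels
    adversarial = -np.log(disc_score)             # non-saturating GAN generator term
    return pixel_l1 + adv_weight * adversarial

# Toy example: a 4x4 scan with a 2x2 missing block
truth = np.arange(16, dtype=float).reshape(4, 4)
pred = truth + 0.5                    # imperfect infill, offset by 0.5 dBZ
mask = np.zeros((4, 4), dtype=bool)
mask[1:3, 1:3] = True                 # the blind-zone pixels
loss = combined_inpainting_loss(pred, truth, mask, disc_score=0.8)
```

Setting `adv_weight=0` recovers the pixel-only objective of the CNN baseline; the small nonzero weight is what lets the GAN trade a little pixel accuracy for the plausible small-scale variability described above.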
Funder: U.S. Department of Energy
Publisher: Copernicus GmbH
Subject: Atmospheric Science
Cited by: 6 articles.