Reconstruction of the Chemical Gas Concentration Distribution Using Partial Convolution-Based Image Inpainting
Author:
Kang Minjae 1, Son Jungjae 1, Lee Byungheon 1, Nam Hyunwoo 1
Affiliation:
1. Chem-Bio Technology Center, Agency for Defense Development, Daejeon 34186, Republic of Korea
Abstract
Interpolation estimates unknown values from limited information and is conventionally based on mathematical calculation. In this study, we addressed interpolation from an image-based perspective and expanded the use of image inpainting to estimate values at unknown points. When a chemical gas is dispersed by a chemical attack or an act of terrorism, its concentration at each location can be measured by the deployed sensors. By interpolating these concentrations, we can obtain contours of the gas concentration. Accurately delineating the contaminated region on a map enables an optimal response that minimizes damage. However, areas covered by an insufficient number of sensors yield less accurate contours than other areas. To obtain more accurate contour data, we propose an image inpainting-based method that enhances reliability by erasing low-accuracy areas of the contour and reconstructing them. Partial convolution is used as the machine learning approach for image inpainting, with a modified loss function for optimization. To train the model, we developed a gas diffusion simulation model and generated a gas concentration contour dataset comprising 100,000 contour images. The results were compared with those of Kriging interpolation, a conventional spatial interpolation method, and showed 13.21% higher accuracy. This suggests that interpolation from an image-based perspective can achieve higher accuracy than numerical interpolation on well-trained data. The proposed method was validated using gas concentration contour data from the Nuclear Biological Chemical Reporting And Modeling System (NBC_RAMS), a verified gas dispersion modeling software developed by the Agency for Defense Development, South Korea.
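The partial convolution layer referenced in the abstract (Liu et al., 2018) re-normalizes each convolution window by the number of valid (unmasked) pixels and propagates an updated mask, so that erased regions are gradually filled as the signal passes through the network. The following is a minimal PyTorch sketch of such a layer; it is not code from the paper, and the class name, argument choices, and single-channel mask convention are illustrative assumptions.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PartialConv2d(nn.Module):
    """Minimal partial convolution: the output is re-scaled by the fraction
    of valid pixels under each kernel window, and the mask is updated so
    that windows containing any valid pixel become valid afterwards."""

    def __init__(self, in_ch, out_ch, kernel_size=3, stride=1, padding=1):
        super().__init__()
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size, stride, padding, bias=True)
        # All-ones kernel used only to count valid pixels per window.
        self.register_buffer("ones_kernel",
                             torch.ones(1, 1, kernel_size, kernel_size))
        self.window_size = kernel_size * kernel_size
        self.stride, self.padding = stride, padding

    def forward(self, x, mask):
        # x: (N, C, H, W) contour image; mask: (N, 1, H, W),
        # 1 = known pixel, 0 = erased low-accuracy region.
        with torch.no_grad():
            valid_count = F.conv2d(mask, self.ones_kernel,
                                   stride=self.stride, padding=self.padding)
        out = self.conv(x * mask)
        bias = self.conv.bias.view(1, -1, 1, 1)
        # Re-normalize by the fraction of valid pixels; zero out windows
        # that saw no valid input at all.
        scale = self.window_size / valid_count.clamp(min=1.0)
        out = torch.where(valid_count > 0,
                          (out - bias) * scale + bias,
                          torch.zeros_like(out))
        new_mask = (valid_count > 0).float()
        return out, new_mask

A network built from such layers takes the contour image together with a binary mask marking the erased low-accuracy regions; the mask shrinks toward all-ones through successive layers, which is how the inpainting model reconstructs the erased parts of the concentration contour.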
Funder
Agency for Defense Development