VQ-InfraTrans: A Unified Framework for RGB-IR Translation with Hybrid Transformer
Published: 2023-12-07
Volume: 15
Issue: 24
Page: 5661
ISSN: 2072-4292
Container-title: Remote Sensing
Language: en
Author:
Sun Qiyang 1, Wang Xia 1, Yan Changda 1, Zhang Xin 1
Affiliation:
1. School of Optics and Photonics, Beijing Institute of Technology, No. 5 South Street, Beijing 100081, China
Abstract
Infrared (IR) images, which contain rich spectral information, are essential in many fields. Most current RGB-IR transfer work relies on conditional generative models trained on IR images from specific devices and scenes. However, these models only establish an empirical mapping between RGB and IR images within a single dataset, and therefore cannot handle multi-scene, multi-band (0.7–3 μm and 8–15 μm) transfer tasks. To address this challenge, we propose VQ-InfraTrans, a comprehensive framework for transferring images from the visible spectrum to the infrared spectrum. Our framework supports multiple transfer modes, both unconditional and conditional, enabling diverse and flexible image transformations. Instead of training an individual model for each condition or dataset, we propose a two-stage transfer framework that integrates these diverse requirements into a unified model: a composite encoder–decoder based on VQ-GAN, followed by a multi-path transformer that translates multi-modal images from RGB to infrared. To reduce the significant errors that arise when transferring specific targets due to their radiance, we further develop a hybrid editing module that precisely maps spectral transfer information for local targets. Qualitative and quantitative comparisons reveal substantial improvements over prior algorithms: the objective evaluation metric SSIM (structural similarity index) improved by 2.24% and the PSNR (peak signal-to-noise ratio) by 2.71%.
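The two-stage design described in the abstract follows the standard VQ-GAN recipe: in stage one, an encoder maps an image to continuous features that are snapped to the nearest entries of a learned discrete codebook; in stage two, a transformer models the resulting token sequence. A minimal NumPy sketch of that vector-quantization step is shown below. This is a generic illustration, not the paper's implementation; the sizes `K`, `D`, `N` and the function name `vector_quantize` are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed toy sizes: K codebook entries of dimension D, N encoder feature vectors.
K, D, N = 512, 16, 64
codebook = rng.normal(size=(K, D))       # learned during stage-one VQ-GAN training
z_continuous = rng.normal(size=(N, D))   # encoder outputs for one image

def vector_quantize(z, codebook):
    """Snap each feature vector to its nearest codebook entry (L2 distance)."""
    # Pairwise squared distances between features and codebook entries, shape (N, K).
    d = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)
    indices = d.argmin(axis=1)           # discrete tokens the stage-two transformer models
    return codebook[indices], indices

z_q, tokens = vector_quantize(z_continuous, codebook)
print(tokens[:8])  # token ids in [0, K)
```

In a full VQ-GAN, the quantized features `z_q` are passed to the decoder (here, the composite decoder that reconstructs the IR image), while the gradient is copied past the non-differentiable `argmin` via the straight-through estimator.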
Subject
General Earth and Planetary Sciences