Affiliation:
1. School of Software Engineering, Xi’an Jiaotong University, Xi’an 710049, China
2. Key Laboratory of Child Development and Learning Science, Southeast University, Nanjing 211189, China
3. School of Human Settlements and Civil Engineering, Xi’an Jiaotong University, Xi’an 710049, China
Abstract
Hyperspectral image (HSI) super-resolution is a practical yet challenging task, as it requires reconstructing a large number of spectral bands; high-quality reconstruction can greatly benefit subsequent downstream tasks. Current mainstream hyperspectral super-resolution methods are mainly built on 3D convolutional neural networks (3D CNNs). However, the small kernel sizes commonly used in 3D CNNs limit the model’s receptive field, preventing it from exploiting wider contextual information. Although the receptive field can be expanded by enlarging the kernel, doing so dramatically increases the number of model parameters. Furthermore, popular vision transformers designed for natural images are ill-suited to processing HSI: because HSI is sparse in the spatial domain, applying self-attention to it wastes substantial computational resources. In this paper, we design a hybrid architecture called HyFormer, which combines the strengths of CNNs and transformers for hyperspectral super-resolution. The transformer branch enables intra-spectra interaction to capture fine-grained contextual details at each specific wavelength, while the CNN branch performs efficient inter-spectra feature extraction across wavelengths while maintaining a large receptive field. Specifically, in the transformer branch we propose a novel Grouping-Aggregation Transformer (GAT), comprising grouping self-attention (GSA) and aggregation self-attention (ASA): GSA extracts diverse fine-grained features of targets, while ASA enables interaction among heterogeneous textures allocated to different channels. In the CNN branch, we propose a Wide-Spanning Separable 3D Attention (WSSA) that enlarges the receptive field while keeping the parameter count low; building upon WSSA, we construct a wide-spanning CNN module to efficiently extract inter-spectra features. Extensive experiments demonstrate the superior performance of our HyFormer.
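The abstract's trade-off between kernel size and parameter count can be illustrated with a back-of-the-envelope calculation. The sketch below is not the paper's WSSA design; it simply contrasts a dense 3D convolution kernel with a spatial/spectral separable factorization (the general idea behind separable 3D operators), using illustrative channel counts and kernel sizes chosen here as assumptions.

```python
# Parameter-count comparison: dense 3D convolution vs. a spatial/spectral
# separable factorization. Channel counts (32 -> 32) and kernel size (7)
# are illustrative assumptions, not the paper's actual configuration.

def conv3d_params(c_in: int, c_out: int, kd: int, kh: int, kw: int) -> int:
    """Weight count of a dense 3D convolution (bias omitted)."""
    return c_in * c_out * kd * kh * kw

def separable3d_params(c_in: int, c_out: int, k_spatial: int, k_spectral: int) -> int:
    """Factorized variant: a 1 x k x k spatial conv followed by a
    k x 1 x 1 spectral conv, covering the same k x k x k neighborhood."""
    spatial = c_in * c_out * 1 * k_spatial * k_spatial
    spectral = c_out * c_out * k_spectral * 1 * 1
    return spatial + spectral

dense = conv3d_params(32, 32, 7, 7, 7)          # 32 * 32 * 343 = 351,232
separable = separable3d_params(32, 32, 7, 7)    # 50,176 + 7,168  =  57,344
print(dense, separable)
```

With these example numbers, the separable form reaches the same 7×7×7 receptive field with roughly one sixth of the parameters, and the gap widens as the kernel grows, which is the motivation the abstract gives for a wide-spanning separable design.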
Funder
National Natural Science Foundation of China
Key Research and Development Program of Shaanxi
Fundamental Research Funds for the Central Universities
Subject
General Earth and Planetary Sciences