Affiliation:
1. Graduate School of Engineering, Kobe University, 1-1 Rokkodai, Nada, Kobe, Hyogo 657-8501, Japan
Abstract
This paper proposes a normalizing flow-based image super-resolution technique that incorporates attention modules. In the proposed method, features of the low-resolution image are extracted with a Swin Transformer, and multi-head attention in the flow layers exploits the resulting feature maps. This architecture enables the efficient injection of the transformer-extracted low-resolution features into the flow layers. Experimental results at ×4 magnification show that the proposed method achieves state-of-the-art performance in quantitative metrics and visual quality among single-loss architectures.
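As a rough illustration of the conditioning scheme the abstract describes, the sketch below shows one way multi-head cross-attention could inject Swin-extracted low-resolution features into the scale/shift branch of an affine coupling (flow) layer. It is a minimal sketch under stated assumptions: the module names, dimensions, and the affine-coupling form are illustrative choices, not the paper's exact implementation.

```python
import torch
import torch.nn as nn


class AttentionConditionedCoupling(nn.Module):
    """Hypothetical affine coupling layer whose scale/shift parameters
    are conditioned on low-resolution features via multi-head
    cross-attention. A sketch of the idea, not the authors' architecture."""

    def __init__(self, channels: int, cond_dim: int, num_heads: int = 4):
        super().__init__()
        # Queries come from the flow features; keys/values from LR features.
        self.attn = nn.MultiheadAttention(
            embed_dim=channels // 2, kdim=cond_dim, vdim=cond_dim,
            num_heads=num_heads, batch_first=True)
        self.to_scale_shift = nn.Linear(channels // 2, channels)

    def forward(self, x: torch.Tensor, lr_feats: torch.Tensor):
        # x: (B, N, C) flow features; lr_feats: (B, M, cond_dim)
        # e.g., tokens from a Swin Transformer backbone.
        x1, x2 = x.chunk(2, dim=-1)
        # Query the LR feature map with the first half of the flow features.
        ctx, _ = self.attn(query=x1, key=lr_feats, value=lr_feats)
        scale, shift = self.to_scale_shift(ctx).chunk(2, dim=-1)
        # Invertible affine transform of the second half (x1 passes through,
        # so the mapping can be inverted exactly given lr_feats).
        y2 = x2 * torch.exp(scale) + shift
        log_det = scale.sum(dim=(1, 2))  # log|det Jacobian| for the flow loss
        return torch.cat([x1, y2], dim=-1), log_det


# Usage with dummy tensors (shapes are illustrative assumptions):
layer = AttentionConditionedCoupling(channels=64, cond_dim=96)
x = torch.randn(2, 256, 64)    # flow features, e.g., 16x16 spatial tokens
lr = torch.randn(2, 64, 96)    # LR features from a Swin-style encoder
y, log_det = layer(x, lr)
```

Because only half of the channels are transformed and the scale/shift depend solely on the untouched half and the LR conditioning, the layer stays exactly invertible, which is what allows training with a single negative log-likelihood loss as the abstract suggests.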