Joint Luminance-Saliency Prior and Attention for Underwater Image Quality Assessment
Published: 2024-08-17
Volume: 16
Issue: 16
Page: 3021
ISSN: 2072-4292
Container-title: Remote Sensing
Language: en
Short-container-title: Remote Sensing
Author:
Lin Zhiqiang 1, He Zhouyan 1, Jin Chongchong 1, Luo Ting 1, Chen Yeyao 2
Affiliation:
1. College of Science and Technology, Ningbo University, Ningbo 315212, China
2. Faculty of Information Science and Engineering, Ningbo University, Ningbo 315212, China
Abstract
Underwater images, as a crucial medium for recording ocean information captured by underwater sensors, play a vital role in various underwater tasks. However, they are prone to distortion caused by the imaging environment, and the resulting decline in visual quality is an urgent issue for marine vision systems to address. Therefore, it is necessary to develop underwater image enhancement (UIE) methods and corresponding quality assessment methods. At present, most underwater image quality assessment (UIQA) methods rely on handcrafted features that characterize degradation attributes; such features struggle to measure complex mixed distortions and often deviate from human visual perception in practical applications. Furthermore, existing UIQA methods rarely consider quality from the perspective of enhancement effects. To this end, this paper, for the first time, employs luminance and saliency priors as critical visual information to measure the global and local quality improvements achieved by UIE algorithms; the resulting method is named JLSAU. JLSAU is built upon a pyramid-structured backbone, supplemented by a Luminance Feature Extraction Module (LFEM) and a Saliency Weight Learning Module (SWLM), which obtain perceptual features with luminance and saliency priors at multiple scales. The luminance prior captures visually sensitive global luminance distortion, including histogram statistics and grayscale features with positional information. The saliency prior captures visual information that reflects local quality variation in both the spatial and channel domains. Finally, an Attention Feature Fusion Module (AFFM) is proposed to effectively model the relationships among the different levels of visual information contained in the multi-scale features. Experimental results on the public UIQE and UWIQA datasets demonstrate that the proposed JLSAU outperforms existing state-of-the-art UIQA methods.
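Since only the abstract is available here, the following is a minimal, speculative PyTorch sketch of how an attention-based fusion of multi-scale features, as the AFFM is described above, could be structured. All class names, shapes, and the channel-attention design are illustrative assumptions, not the authors' implementation.

# Speculative sketch of an attention-based multi-scale feature fusion block,
# loosely inspired by the AFFM described in the abstract. Everything below
# (names, shapes, fusion logic) is an assumption for illustration only.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionFeatureFusion(nn.Module):
    """Fuses a fine (high-resolution) and a coarse (low-resolution) feature
    map with channel attention, roughly mirroring the idea of modeling
    relationships among different levels of multi-scale visual features."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        # Channel attention: squeeze spatial dims, learn per-channel weights.
        self.channel_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Project the concatenated features back to the original width.
        self.proj = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, fine: torch.Tensor, coarse: torch.Tensor) -> torch.Tensor:
        # Upsample the coarse map to the fine map's spatial size.
        coarse_up = F.interpolate(coarse, size=fine.shape[-2:],
                                  mode="bilinear", align_corners=False)
        fused = self.proj(torch.cat([fine, coarse_up], dim=1))
        # Reweight channels so the network can emphasize the level of
        # visual information most relevant to perceived quality.
        return fused * self.channel_gate(fused)

# Example: fuse 32-channel features from two pyramid scales.
affm = AttentionFeatureFusion(channels=32)
fine = torch.randn(1, 32, 64, 64)
coarse = torch.randn(1, 32, 32, 32)
print(affm(fine, coarse).shape)  # torch.Size([1, 32, 64, 64])

The channel gate here is a standard squeeze-and-excitation pattern; the paper's AFFM may also attend spatially or fuse more than two scales, which this sketch does not attempt to reproduce.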
Funder:
National Natural Science Foundation of China; Natural Science Foundation of Zhejiang Province; Zhejiang Provincial Postdoctoral Research Excellence Foundation