Abstract
Compared with monocular images, scene discrepancies between the left- and right-view images pose additional challenges for visual quality prediction of binocular images. Herein, we propose a hierarchical feature fusion network (HFFNet) for blind binocular image quality prediction that handles scene discrepancies and uses multilevel fusion features from the left- and right-view images to reflect distortions in binocular images. Specifically, a feature extraction network based on MobileNetV2 extracts feature layers from the distorted binocular images; low-level, middle-level, and high-level binocular fusion features are then obtained by fusing the corresponding left- and right-view monocular features at each level through a feature gate module; next, three feature enhancement modules enrich the information of the extracted features at the different levels. Finally, the feature maps obtained from the high-, middle-, and low-level fusion features are merged by a three-input feature fusion module. To the best of our knowledge, the proposed HFFNet achieves better results than existing methods on two benchmark datasets.
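The abstract does not specify the internals of the feature gate module, so the following is only a minimal NumPy sketch of one common gated-fusion formulation that matches the description: a sigmoid gate computed from the concatenated left- and right-view features weights each view's contribution per channel. The function name `feature_gate_fusion` and the parameters `w` and `b` are hypothetical stand-ins for learned weights, not the authors' actual module.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def feature_gate_fusion(f_left, f_right, w, b):
    """Gated fusion of left- and right-view feature maps (illustrative only).

    A gate g in (0, 1) is computed from both views and blends them:
        g = sigmoid(w @ [f_left; f_right] + b)
        fused = g * f_left + (1 - g) * f_right
    w (C, 2C) and b (C,) stand in for learned parameters.
    """
    stacked = np.concatenate([f_left, f_right], axis=0)            # (2C, H, W)
    gate = sigmoid(np.tensordot(w, stacked, axes=([1], [0]))
                   + b[:, None, None])                             # (C, H, W)
    return gate * f_left + (1.0 - gate) * f_right

# Toy example: C = 4 channels, 8x8 spatial maps for each view
rng = np.random.default_rng(0)
C, H, W = 4, 8, 8
f_l = rng.standard_normal((C, H, W))
f_r = rng.standard_normal((C, H, W))
w = rng.standard_normal((C, 2 * C)) * 0.1   # hypothetical gate weights
b = np.zeros(C)
fused = feature_gate_fusion(f_l, f_r, w, b)
print(fused.shape)  # (4, 8, 8)
```

Because the gate is a convex combination, every fused value lies between the corresponding left- and right-view values; in HFFNet this fusion would be applied separately to the low-, middle-, and high-level feature layers before enhancement and merging.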
Funder
National Natural Science Foundation of China
China Postdoctoral Science Foundation
Subject
Atomic and Molecular Physics, and Optics; Engineering (miscellaneous); Electrical and Electronic Engineering