Affiliation:
1. Kyushu University, Japan
Abstract
A unified decomposition-and-integration framework is presented for visual saliency estimation of omnidirectional high dynamic range (HDR) images, which allows straightforward reuse of existing saliency estimation methods designed for typical images with a narrow field of view and low dynamic range (LDR). First, the proposed method decomposes a given omnidirectional HDR image, both spatially and in intensity, into multiple partially overlapping LDR images with quasi-uniform spatial resolution and no polar singularities, using a spherical overset grid and a tone-mapping-based synthesis of imaginary multiexposure images. A standard saliency estimation method for typical images is then applied to each decomposed image. Finally, the saliency maps of the decomposed images are optimally integrated from the overset-grid coordinate systems and LDR representation back to the coordinate system and HDR representation of the original image. The proposed method is applied to actual omnidirectional HDR images, and its effectiveness is demonstrated.
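The decompose-estimate-integrate pipeline can be illustrated with a minimal sketch. The Python code below is not the authors' implementation: it substitutes overlapping perspective viewports for the spherical overset grid, a simple exposure-scaling and gamma model for the tone-mapping-based multiexposure synthesis, OpenCV's off-the-shelf spectral-residual detector for the saliency stage, and plain scatter-averaging for the optimal integration step. All function names and parameter values are illustrative.

import numpy as np
import cv2  # requires opencv-contrib-python for cv2.saliency

def perspective_view(equirect, yaw, pitch, fov_deg=90.0, size=256):
    # Sample a perspective viewport from an equirectangular image and
    # return the view together with its source coordinates (for splat-back).
    H, W = equirect.shape[:2]
    f = 0.5 * size / np.tan(np.radians(fov_deg) / 2.0)
    xs, ys = np.meshgrid(np.arange(size) - size / 2.0,
                         np.arange(size) - size / 2.0)
    dirs = np.stack([xs, ys, np.full_like(xs, f)], axis=-1)
    dirs /= np.linalg.norm(dirs, axis=-1, keepdims=True)
    cp, sp, cy, sy = np.cos(pitch), np.sin(pitch), np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    d = dirs @ (Ry @ Rx).T
    lon = np.arctan2(d[..., 0], d[..., 2])           # [-pi, pi]
    lat = np.arcsin(np.clip(d[..., 1], -1.0, 1.0))   # [-pi/2, pi/2]
    map_x = ((lon / np.pi + 1.0) * 0.5 * (W - 1)).astype(np.float32)
    map_y = ((lat / (np.pi / 2) + 1.0) * 0.5 * (H - 1)).astype(np.float32)
    view = cv2.remap(equirect, map_x, map_y, cv2.INTER_LINEAR)
    return view, map_x, map_y

def synth_exposures(hdr_patch, stops=(-2.0, 0.0, 2.0)):
    # Imaginary multiexposure LDR images: scale radiance, clip, gamma-encode.
    for s in stops:
        ldr = np.clip(hdr_patch * (2.0 ** s), 0.0, 1.0) ** (1.0 / 2.2)
        yield (ldr * 255.0).astype(np.uint8)

def view_saliency(ldr_u8):
    # Any off-the-shelf 2-D saliency method fits here; spectral residual
    # is used purely as a placeholder.
    det = cv2.saliency.StaticSaliencySpectralResidual_create()
    ok, m = det.computeSaliency(ldr_u8)
    return m.astype(np.float32) if ok else np.zeros(ldr_u8.shape[:2], np.float32)

def omni_hdr_saliency(hdr_equirect, n_yaw=8, pitches=(-0.6, 0.0, 0.6)):
    # Decompose -> per-view, per-exposure saliency -> scatter-average back
    # onto the equirectangular grid (weights account for viewport overlap).
    H, W = hdr_equirect.shape[:2]
    acc = np.zeros((H, W), np.float32)
    wgt = np.zeros((H, W), np.float32)
    for pitch in pitches:
        for k in range(n_yaw):
            view, mx, my = perspective_view(hdr_equirect, 2 * np.pi * k / n_yaw, pitch)
            sal = np.mean([view_saliency(e) for e in synth_exposures(view)], axis=0)
            ix = np.round(mx).astype(np.int64) % W
            iy = np.clip(np.round(my).astype(np.int64), 0, H - 1)
            np.add.at(acc, (iy, ix), sal)
            np.add.at(wgt, (iy, ix), 1.0)
    return acc / np.maximum(wgt, 1e-6)

With an HDR panorama loaded as a float32 array, for example hdr = cv2.imread("scene.hdr", cv2.IMREAD_UNCHANGED) with a hypothetical file name, omni_hdr_saliency(hdr) returns an equirectangular saliency map. The weight accumulator wgt is what stands in for the paper's optimal integration over the partially overlapping decomposed images.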