Exploring Focus and Depth-Induced Saliency Detection for Light Field
Authors:
Zhang Yani 1, Chen Fen 1,2, Peng Zongju 1,2, Zou Wenhui 2, Zhang Changhe 1
Affiliation:
1. School of Electrical and Electronic Engineering, Chongqing University of Technology, Chongqing 400054, China
2. Faculty of Information Science and Engineering, Ningbo University, No. 818, Ningbo 315211, China
Abstract
The abundance of features available in the light field has been shown to be useful for saliency detection in complex scenes. However, bottom-up saliency detection models are limited in their ability to exploit these features. In this paper, we propose a light field saliency detection method centered on depth-induced saliency, which explores the interactions among different cues more thoroughly. First, we localize a rough saliency region based on the compactness of color and depth. Then, the relationships among depth, focus, and salient objects are carefully investigated: the focus cue of the focal stack is used to highlight foreground objects, while the depth cue is used to refine the coarse salient objects. Furthermore, exploiting the consistency between color smoothness and depth space, we improve an optimization model, referred to as color- and depth-induced cellular automata, to increase the accuracy of the saliency maps. Finally, to avoid interference from redundant information, the mean absolute error is chosen as the criterion for filtering candidate saliency maps and selecting the best result. Experimental results on three public light field datasets show that the proposed method performs favorably against state-of-the-art conventional light field saliency detection approaches, and even against deep-learning-based ones.
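Since the abstract compresses the optimization step into a single sentence, a rough sketch may help: a single-layer cellular automaton iteratively updates each superpixel's saliency from its neighbors, and building the impact factors from both color and depth similarity is the "color- and depth-induced" idea. The snippet below is a minimal illustration, not the paper's exact formulation; the feature layout, the function name cellular_automata_update, and the parameters sigma2, iters, a, and b are all assumptions made for this sketch.

import numpy as np

def cellular_automata_update(sal, color_feat, depth_feat, neighbors,
                             sigma2=0.1, iters=20, a=0.6, b=0.2):
    """Refine per-superpixel saliency with a single-layer cellular
    automaton whose impact factors mix color and depth similarity.

    sal        : (N,) initial saliency values in [0, 1]
    color_feat : (N, 3) mean Lab color of each superpixel
    depth_feat : (N,)   mean depth of each superpixel
    neighbors  : (N, N) boolean superpixel adjacency matrix
    """
    # Impact factor matrix: adjacent superpixels that are close in both
    # color and depth pull a cell's next state more strongly.
    dist = (np.linalg.norm(color_feat[:, None] - color_feat[None], axis=2)
            + np.abs(depth_feat[:, None] - depth_feat[None]))
    F = np.exp(-dist / sigma2) * neighbors
    row_sum = F.sum(axis=1, keepdims=True)
    row_sum[row_sum == 0] = 1.0
    F = F / row_sum                                # row-normalize
    # Coherence: a cell with a very similar neighbor trusts its
    # neighborhood more than its own current state.
    c = 1.0 / F.max(axis=1).clip(1e-6)
    c = a * (c - c.min()) / (c.max() - c.min() + 1e-6) + b
    for _ in range(iters):
        sal = c * sal + (1.0 - c) * (F @ sal)      # synchronous update
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-6)

The synchronous update S <- C*S + (I - C)*F*S follows the single-layer cellular automata of Qin et al. (CVPR 2015), which typically converges within a few tens of iterations; adding the depth term to the feature distance is the depth-induced modification suggested by the abstract.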
Funder
National Natural Science Foundation of China; Natural Science Foundation of Chongqing; Science and Technology Research Program of the Chongqing Municipal Education Commission; Scientific Research Foundation of the Chongqing University of Technology
Subject
General Physics and Astronomy
Cited by
1 article.