Authors:
Bujia Gaston, Sclar Melanie, Vita Sebastian, Solovey Guillermo, Kamienkowski Juan Esteban
Abstract
Finding objects is essential for almost any daily-life visual task. Saliency models have been useful for predicting fixation locations in natural images during free-exploration tasks. However, predicting the sequence of fixations during visual search remains challenging. Bayesian observer models are particularly suited for this task because they represent visual search as an active sampling process. Nevertheless, how they adapt to natural images remains largely unexplored. Here, we propose a unified Bayesian model for visual search guided by saliency maps as prior information. We validated our model with a visual search experiment in natural scenes. We showed that, although state-of-the-art saliency models performed well in predicting the first two fixations in a visual search task (90% of the performance achieved by humans), their performance degraded to chance afterward. Therefore, saliency maps alone could model bottom-up first impressions, but they were not enough to explain scanpaths when top-down task information was critical. In contrast, our model led to human-like performance and scanpaths, as revealed by: first, the agreement between targets found by the model and by humans on a trial-by-trial basis; and second, the scanpath similarity between the model and humans, which makes the behavior of the model indistinguishable from that of humans. Altogether, the combination of deep neural network-based saliency models for image processing and a Bayesian framework for scanpath integration proves to be a powerful and flexible approach to modeling human behavior in natural scenarios.
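The core idea of the abstract — a saliency map as the prior over target location, updated by Bayes' rule after each fixation, with the next fixation chosen from the posterior — can be sketched minimally as follows. This is an illustrative simplification, not the authors' actual model: it uses a flattened grid, a single hypothetical `detect_prob` parameter for fixated-location detection, and a greedy maximum-a-posteriori fixation rule instead of the expected-gain rules used by full Bayesian observer models.

```python
import numpy as np

def bayesian_search(saliency, target_idx, detect_prob=0.9,
                    max_fixations=10, rng=None):
    """Toy Bayesian searcher on a flattened saliency grid.

    The normalized saliency map is the prior over target location.
    Each fixation yields a noisy present/absent observation; a miss
    at the fixated cell down-weights that cell via Bayes' rule, and
    the next fixation greedily targets the posterior maximum.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    posterior = np.asarray(saliency, dtype=float)
    posterior = posterior / posterior.sum()      # prior from saliency
    scanpath = []
    for _ in range(max_fixations):
        fix = int(np.argmax(posterior))          # greedy MAP fixation
        scanpath.append(fix)
        # Noisy detection: a hit occurs with prob. detect_prob
        # only when the target is actually fixated.
        if fix == target_idx and rng.random() < detect_prob:
            return scanpath, True
        # Bayes update given "target not detected here":
        likelihood = np.ones_like(posterior)
        likelihood[fix] = 1.0 - detect_prob      # miss probability
        posterior = likelihood * posterior
        posterior /= posterior.sum()
    return scanpath, False
```

For example, with a prior that already peaks on the target the model finds it on the first fixation, while a misleading prior forces one or more corrective saccades before the posterior mass shifts onto the true location.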
Subject
Cellular and Molecular Neuroscience,Cognitive Neuroscience,Developmental Neuroscience,Neuroscience (miscellaneous)
Cited by: 2 articles.