Abstract
Semantic information is important in eye-movement control. An important semantic influence on gaze guidance relates to object-scene relationships: objects that are semantically inconsistent with the scene attract more fixations than consistent objects. One interpretation of this effect is that fixations are driven towards inconsistent objects because they are semantically more informative. We tested this explanation using contextualized meaning maps, a method based on crowd-sourced ratings to quantify the spatial distribution of context-sensitive ‘meaning’ in images. In Experiment 1, we compared gaze data and contextualized meaning maps for images in which object-scene consistency was manipulated. Observers fixated more on inconsistent than on consistent objects. However, contextualized meaning maps did not assign higher meaning to image regions that contained semantic inconsistencies. In Experiment 2, a large number of raters evaluated the meaningfulness of a set of carefully selected image regions. The results suggest that the same scene locations were experienced as slightly less meaningful when they contained inconsistent rather than consistent objects. In summary, we demonstrated that, in the context of our rating task, semantically inconsistent objects are experienced as less meaningful than their consistent counterparts, and that contextualized meaning maps do not capture prototypical influences of image meaning on gaze guidance.
Publisher: Cold Spring Harbor Laboratory