Authors:
Christopher DiMattina, Curtis L. Baker
Abstract
Segmenting scenes into distinct surfaces is a basic visual perception task, and luminance differences between adjacent surfaces often provide an important segmentation cue. However, a mean luminance difference between two surfaces can arise without any sharp change in albedo at their boundary, instead reflecting differences in the proportions of small light and dark areas (e.g., texture elements) within each surface; we refer to such a boundary as a luminance texture boundary. Here we investigate the performance of human observers segmenting luminance texture boundaries. We demonstrate that a simple model involving a single stage of filtering cannot explain observer performance unless it incorporates contrast normalization. In additional experiments in which observers segment luminance texture boundaries while ignoring superimposed luminance step boundaries, we demonstrate that the one-stage model cannot explain performance even with contrast normalization. We then present a Filter-Rectify-Filter (FRF) model positing two cascaded stages of filtering, which fits our data well and explains observers' ability to segment luminance texture boundary stimuli in the presence of interfering luminance step boundaries. We propose that such computations may be useful for boundary segmentation in natural scenes, where shadows often give rise to luminance step edges that do not correspond to surface boundaries.
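The FRF scheme described above, two cascaded filtering stages separated by a rectifying nonlinearity, can be sketched in a few lines. This is a minimal illustration, not the paper's fitted model: the Gabor kernels, the squaring rectifier, and all filter parameters below are assumptions chosen for demonstration.

```python
import numpy as np

def gabor(size, freq, theta, sigma):
    """Odd-symmetric Gabor kernel (illustrative parameters, not fitted values)."""
    ax = np.arange(size) - size // 2
    x, y = np.meshgrid(ax, ax)
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.sin(2 * np.pi * freq * xr)

def conv2(img, kern):
    """Circular 2-D convolution via FFT, output the same size as img."""
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kern, s=img.shape)))

def frf_response(img):
    # Stage 1: fine-scale linear filter tuned to the small texture elements
    stage1 = conv2(img, gabor(15, 0.25, 0.0, 3.0))
    # Rectification: squaring converts signed filter output to local texture energy
    rectified = stage1**2
    # Stage 2: coarse-scale filter that detects the large-scale boundary
    # between regions of differing texture energy
    return conv2(rectified, gabor(63, 0.02, 0.0, 15.0))
```

In this sketch, a luminance step passed through the fine first-stage filter produces energy only at the step itself, whereas a texture difference produces an extended region of differing energy that the coarse second-stage filter can detect, which is the intuition for why the cascade can segment texture boundaries in the presence of interfering step edges.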
Publisher
Cold Spring Harbor Laboratory