Abstract
Visual search is one of the most ecologically important perceptual task domains. One research tradition has studied visual search using simple, parametric stimuli within a signal detection theory or Bayesian modeling framework. However, this tradition has mostly focused on homogeneous distractors (identical to each other), which are not very realistic. In a different tradition, Duncan and Humphreys (1989) conducted a landmark study on visual search with heterogeneous distractors. However, they used complex stimuli, making modeling and the dissociation of component processes difficult. Here, we attempt to unify these research traditions by systematically examining visual search with heterogeneous distractors using simple, parametric stimuli and Bayesian modeling. Our experiment varied multiple factors that could influence performance: set size, task (N-AFC localization vs. detection), whether the target was revealed before or after the search array (perception vs. memory), and stimulus spacing. We found that performance robustly decreased with increasing set size. When examining within-trial summary statistics, we found that the minimum target-to-distractor feature difference was a stronger predictor of behavior than either the mean target-to-distractor difference or the distractor variance. To obtain process-level understanding, we formulated a Bayesian optimal-observer model. This model accounted for all summary statistics, including when fitted jointly to the localization and detection data. We replicated these results in a separate experiment with reduced stimulus spacing. Together, our results represent a critique of Duncan and Humphreys's descriptive approach, bring visual search with heterogeneous distractors firmly within the reach of quantitative process models, and affirm the “unreasonable effectiveness” of Bayesian models in explaining visual search.
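The abstract does not spell out the model's equations. As a rough illustration only, the sketch below shows what a Bayesian optimal observer for these two tasks typically looks like, assuming Gaussian measurement noise and a zero-mean Gaussian distractor feature distribution; all function names, parameters, and distributional choices here are illustrative assumptions, not the paper's actual model.

```python
import numpy as np

def log_lik(x, mean, sd):
    """Gaussian log-likelihood of measurements x (up to a shared constant)."""
    return -0.5 * ((x - mean) / sd) ** 2 - np.log(sd)

def localize_target(x, target, sigma, sd_dist):
    """N-AFC localization: posterior over which location holds the target.

    x       : length-N array of noisy feature measurements, one per item
    target  : known target feature value (cued before the array)
    sigma   : measurement noise SD (a free parameter; value illustrative)
    sd_dist : SD of the distractor feature distribution (assumed Gaussian)
    """
    ll_t = log_lik(x, target, sigma)
    # A distractor measurement reflects sensory noise plus distractor
    # variability; with Gaussians, the two variances simply add.
    ll_d = log_lik(x, 0.0, np.sqrt(sigma**2 + sd_dist**2))
    # Location i is the target iff item i looks like the target and all
    # other items look like distractors; shared distractor terms cancel.
    log_post = ll_t - ll_d
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

def detect_target(x, target, sigma, sd_dist):
    """Yes/no detection: report 'present' if the mean local likelihood
    ratio exceeds 1 (flat priors over presence and target location)."""
    ll_t = log_lik(x, target, sigma)
    ll_d = log_lik(x, 0.0, np.sqrt(sigma**2 + sd_dist**2))
    return np.exp(ll_t - ll_d).mean() > 1.0
```

One intuition this sketch offers for the reported summary-statistic result: a distractor whose feature lies close to the target (a small minimum target-to-distractor difference) frequently produces a measurement with a high target likelihood, corrupting both localization and detection, whereas the mean difference and the distractor variance influence the decision only indirectly through such near-target items.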
Publisher
Cold Spring Harbor Laboratory
Cited by
3 articles.