Affiliation:
1. Department of Electrical Communication Engineering
2. Centre for Neuroscience, Indian Institute of Science
Abstract
Many visual tasks involve looking for specific object features, such as finding a face in a crowd. But we also often perform generic tasks where we look for a particular image property, such as finding an odd item, deciding whether two items are the same, or judging whether an object is symmetric. How do we solve such tasks? Using simple neural rules, we show that displays with repeating elements can be distinguished from heterogeneous displays using a property we define as visual homogeneity. In behavior, visual homogeneity predicted response times on visual search and symmetry tasks. Brain imaging during these tasks revealed that visual homogeneity in both tasks is localized to a region in the object-selective cortex. Thus, a novel image property, visual homogeneity, is encoded in a localized brain region to solve generic visual tasks.
Publisher
eLife Sciences Publications, Ltd