Abstract
How do we develop models of the world? Contextualising ambiguous information with previous experience allows us to form an enriched perception. Contextual information and prior knowledge facilitate perceptual processing, improving our recognition of even distorted or obstructed visual inputs. As a result, neuronal processing elicited by identical sensory inputs varies depending on the context in which we encounter those inputs. This modulation is in line with predictive processing accounts of vision, which suggest that the brain uses internal models of the world to predict sensory inputs, with cortical feedback processing in sensory areas encoding beliefs about those inputs. As such, acquiring knowledge should enhance the internal models we use to resolve sensory ambiguities, and feedback signals should encode more accurate estimates of sensory inputs. We used partially occluded Mooney images, ambiguous two-tone images which are difficult to recognise without prior knowledge of the image content, in behavioural and 3T fMRI experiments to measure whether contextual feedback signals in early visual areas are modulated by learning. We show that perceptual priors add sensory detail to contextual feedback processing in early visual areas in response to subsequent presentations of previously ambiguous images.
Publisher: Cold Spring Harbor Laboratory