Abstract
Our surroundings continually propagate audiovisual (AV) signals, and attention helps us make sense of them at any given time. Visual and auditory streams may serve jointly or separately as the basis for selection. Mechanisms of attentional weighting have been suggested to rest on uncertainty estimation processes at the neural level. We examine how reduced temporal and spatial uncertainty facilitates the cross-modal transfer of selective biases from visuospatial attention onto auditory processing. Auditory encoding of random tone pips, devoid of spatial information, was investigated as the pips were paired with spatially informative visual contrast reversals (‘flips’). In a two-interval forced-choice task, participants compared AV sequences of differing temporal uncertainty while sustaining endogenous visuospatial attention over fixed foregrounds (a full, half, or quarter of the disc). Neural encoding of pips was assessed via a temporal response function (TRF) model of the participants’ auditory electroencephalogram (EEG) time series. Cross-modal modulation of auditory processing by visual input was evident in low temporal uncertainty trials, for quarter but not half visual foreground sizes. This relatively late effect (∼300 ms) emerged when pips were closely followed by flips, and transfer effect sizes depended on proximity to the visual target. The results suggest updating mechanisms operating over ongoing neural representations of sounds, visually incorporating relevant spatial attributes for auditory stream segregation. The findings also illustrate how both bimodal and unimodal sources of uncertainty factor into the modulation of neural encoding dynamics by AV attention.
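For readers unfamiliar with the method named above: a forward TRF of the kind referenced here is commonly estimated by time-lagged ridge regression, where the EEG is modeled as the stimulus feature (here, a tone-pip onset train) convolved with an unknown response kernel. The minimal Python/NumPy sketch below illustrates that general technique only; the estimate_trf helper, its parameters, and the simulated data are illustrative assumptions, not the authors' analysis pipeline.

    import numpy as np

    def estimate_trf(stimulus, eeg, fs, tmin=-0.1, tmax=0.5, alpha=1.0):
        # Forward TRF via time-lagged ridge regression:
        # EEG(t) is modeled as sum over lags of w(lag) * stimulus(t - lag).
        lags = np.arange(int(round(tmin * fs)), int(round(tmax * fs)) + 1)
        n = len(stimulus)
        X = np.zeros((n, len(lags)))  # lagged design matrix, one column per lag
        for j, lag in enumerate(lags):
            if lag >= 0:
                X[lag:, j] = stimulus[:n - lag]
            else:
                X[:lag, j] = stimulus[-lag:]
        # Ridge solution: w = (X'X + alpha*I)^(-1) X'y
        w = np.linalg.solve(X.T @ X + alpha * np.eye(len(lags)), X.T @ eeg)
        return lags / fs, w  # lag times (s) and TRF weights

    # Toy usage on simulated data: a sparse tone-pip onset train and one EEG
    # channel whose response to each pip is a ~300 ms kernel plus noise.
    fs = 128
    rng = np.random.default_rng(0)
    stim = (rng.random(fs * 60) < 0.02).astype(float)
    kernel = np.hanning(int(0.3 * fs))
    eeg = np.convolve(stim, kernel)[:len(stim)] + 0.1 * rng.standard_normal(len(stim))
    times, trf = estimate_trf(stim, eeg, fs)  # trf should recover the kernel shape

The ridge penalty (alpha) trades off fit against smoothness of the estimated kernel; in practice it is typically chosen by cross-validation rather than fixed as in this sketch.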