Abstract
Sustained attention is essential for daily life and can be directed to information from different perceptual modalities, including audition and vision. Recently, cognitive neuroscience has aimed to identify neural predictors of behavior that generalize across datasets. Prior work has shown strong generalization of models trained to predict individual differences in sustained attention performance from patterns of fMRI functional connectivity. However, it is an open question whether predictions of sustained attention are specific to the perceptual modality in which they are trained. In the current study we test whether connectome-based models predict performance on attention tasks performed in different modalities. We show first that a predefined network trained to predict adults' visual sustained attention performance generalizes to predict auditory sustained attention performance in three independent datasets (N1=29, N2=62, N3=17; both sexes). Next, we train new network models to predict performance on visual and auditory attention tasks separately. We find that functional networks are largely modality-general, with both model-unique and shared model features predicting sustained attention performance in independent datasets regardless of task modality. Results support the supposition that visual and auditory sustained attention rely on shared neural mechanisms and demonstrate robust generalizability of whole-brain functional network models of sustained attention.
Significance statement
While previous work has demonstrated the external validity of functional connectivity-based networks for predicting cognitive and attentional performance, tests of generalization across visual and auditory perceptual modalities have been limited. The current study demonstrates robust prediction of sustained attention performance regardless of the perceptual modality in which models are trained or tested.
Results demonstrate that connectivity-based models may generalize broadly, capturing variance in sustained attention performance that is agnostic to the perceptual modality of model training.
Publisher
Cold Spring Harbor Laboratory