Abstract
Artificial neural networks are emerging as key tools to model brain processes associated with sound in auditory neuroscience. Most modelling work fits a single model to brain activity averaged across a group of subjects, ignoring individual-specific features of brain organisation. Here we demonstrate the creation of personalised auditory artificial neural models directly aligned with individual brain activity. We used a deep phenotyping dataset from the Courtois neuronal modelling project, in which six subjects watched four seasons (36 hours) of the Friends TV series while undergoing functional magnetic resonance imaging. We fine-tuned SoundNet, an established deep artificial neural network, and achieved substantial improvements in predicting individual brain activity, including but not limited to the auditory and visual cortices. Performance gains on the HEAR evaluation benchmark, a large collection of downstream AI audio tasks, were modest but consistent, indicating that brain alignment leads to more generalisable representations. The fine-tuned models were also subject-specific: models trained on a given subject's data outperformed models trained on other subjects' data. The resulting individual brain models thus provide a new tool for exploring the idiosyncratic representations of auditory signals within the individual human brain.
Publisher
Cold Spring Harbor Laboratory