Abstract
Deep learning advances have revolutionized computational modeling approaches in neuroscience. However, their black-box nature makes it challenging to use deep learning models to discover new insights about brain function. Focusing on human language processing, we propose a new framework to improve the quality and interpretability of the inferences we make from deep learning-based models. First, we add interpretable components to a deep language model and use it to build a predictive encoding model. Then, we use the model’s predictive abilities to simulate brain responses to controlled stimuli from published experiments. We find that our model, based on a multi-timescale recurrent neural network, captures many previously reported temporal context effects in human cortex. Its failure to capture other effects also highlights important gaps in current language models. Finally, we use this new framework to generate model-based evidence that supports the proposal that different linguistic features are represented at different timescales across cortex.
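To make the pipeline in the abstract concrete, below is a minimal sketch (not the authors' code) of its two pieces: a recurrent network whose units are assigned fixed timescales, and a linear encoding model mapping its hidden states to voxel responses. It assumes a leaky-integrator update as the multi-timescale mechanism and ridge regression as the encoding model, both common choices in this literature; all names, shapes, and data are illustrative.

```python
# Hedged sketch of a multi-timescale RNN + fMRI-style encoding model.
# Everything here is illustrative; the paper's actual architecture and
# fitting procedure are not specified in the abstract.
import numpy as np

rng = np.random.default_rng(0)

def multi_timescale_rnn(embeddings, timescales):
    """Leaky-integrator RNN: unit i integrates input with time constant tau_i.

    h_t[i] = (1 - 1/tau_i) * h_{t-1}[i] + (1/tau_i) * tanh(W x_t)[i]
    Larger tau -> slower update -> longer temporal context for that unit.
    """
    n_units, n_dim = len(timescales), embeddings.shape[1]
    W = rng.standard_normal((n_units, n_dim)) / np.sqrt(n_dim)
    alpha = 1.0 / np.asarray(timescales)   # per-unit update rates
    h = np.zeros(n_units)
    states = []
    for x in embeddings:                   # one word (timestep) at a time
        h = (1 - alpha) * h + alpha * np.tanh(W @ x)
        states.append(h.copy())
    return np.array(states)                # (n_words, n_units)

def fit_encoding_model(features, brain, lam=1.0):
    """Ridge regression from model features to per-voxel responses."""
    n_feat = features.shape[1]
    return np.linalg.solve(features.T @ features + lam * np.eye(n_feat),
                           features.T @ brain)

# Toy run: 200 "words", 50-dim embeddings, 3 simulated voxels.
emb = rng.standard_normal((200, 50))
taus = np.geomspace(1, 64, 32)             # unit timescales spanning ~1-64 words
states = multi_timescale_rnn(emb, taus)
voxels = states @ rng.standard_normal((32, 3)) + 0.1 * rng.standard_normal((200, 3))
weights = fit_encoding_model(states, voxels)
pred = states @ weights
print("encoding-model correlation per voxel:",
      [round(float(np.corrcoef(pred[:, v], voxels[:, v])[0, 1]), 3)
       for v in range(3)])
```

Once such an encoding model is fit, the "in silico experiment" step the abstract describes amounts to feeding controlled stimuli through the same pipeline and reading out the predicted voxel responses in place of new brain recordings.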
Publisher
Cold Spring Harbor Laboratory