Authors
Charlotte Caucheteux, Alexandre Gramfort, Jean-Rémi King
Abstract
Language transformers, like GPT-2, have demonstrated remarkable abilities to process text, and now constitute the backbone of deep translation, summarization and dialogue algorithms. However, whether these models encode information that relates to human comprehension remains controversial. Here, we show that the representations of GPT-2 not only map onto the brain responses to spoken stories, but also predict the extent to which subjects understand narratives. To this end, we analyze 101 subjects recorded with functional Magnetic Resonance Imaging while listening to 70 min of short stories. We then fit a linear model to predict brain activity from GPT-2’s activations, and correlate this mapping with subjects’ comprehension scores as assessed for each story. The results show that GPT-2’s brain predictions significantly correlate with semantic comprehension. These effects are bilaterally distributed in the language network and peak with a correlation of R=0.50 in the angular gyrus. Overall, this study paves the way to model narrative comprehension in the brain through the lens of modern language algorithms.
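The abstract describes an encoding-model analysis: a linear mapping is fit from GPT-2 activations to fMRI responses, and the accuracy of that mapping (the "brain score") is then correlated with each subject's comprehension score. The following is a minimal Python sketch of such an analysis, assuming scikit-learn ridge regression, generic array shapes, and hypothetical variable names (gpt2_activations, bold, comprehension scores); it is not the authors' actual pipeline.

import numpy as np
from scipy.stats import pearsonr
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import KFold


def brain_score(gpt2_activations, bold, n_splits=5):
    """Cross-validated voxel-wise correlation between predicted and measured BOLD.

    gpt2_activations : (n_samples, n_features) GPT-2 activations aligned to fMRI samples
    bold             : (n_samples, n_voxels) BOLD responses of one subject
    Returns the mean Pearson r per voxel across folds (the "brain score").
    """
    n_voxels = bold.shape[1]
    scores = np.zeros((n_splits, n_voxels))
    for fold, (train, test) in enumerate(KFold(n_splits).split(gpt2_activations)):
        # Linear (ridge) mapping from GPT-2 activations to brain activity
        model = RidgeCV(alphas=np.logspace(-1, 6, 8))
        model.fit(gpt2_activations[train], bold[train])
        pred = model.predict(gpt2_activations[test])
        # Correlate predicted and measured activity in each voxel
        scores[fold] = [pearsonr(pred[:, v], bold[test, v])[0] for v in range(n_voxels)]
    return scores.mean(axis=0)


# Relating the mapping to behaviour (hypothetical arrays): correlate each
# subject's brain score, e.g. averaged within a region of interest, with
# that subject's comprehension score for the corresponding story.
# r, p = pearsonr(per_subject_brain_scores, per_subject_comprehension_scores)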
Publisher
Cold Spring Harbor Laboratory
References
22 articles.
1. Alec Radford, Jeffrey Wu, Rewon Child, David Luan, Dario Amodei, and Ilya Sutskever. Language Models are Unsupervised Multitask Learners. 2018.
2. Martin Schrimpf, Idan Blank, Greta Tuckute, Carina Kauf, Eghbal A. Hosseini, Nancy Kanwisher, Joshua Tenenbaum, and Evelina Fedorenko. Artificial Neural Networks Accurately Predict Language Processing in the Brain. bioRxiv 2020.06.26.174482, June 2020.
3. Charlotte Caucheteux and Jean-Rémi King. Language processing in brains and deep neural networks: computational convergence and its limits. bioRxiv 2020.07.03.186288, July 2020.
4. Ariel Goldstein, Zaid Zada, Eliav Buchnik, Mariano Schain, Amy Price, Bobbi Aubrey, Samuel A. Nastase, Amir Feder, Dotan Emanuel, Alon Cohen, Aren Jansen, Harshvardhan Gazula, Gina Choe, Aditi Rao, Catherine Kim, Colton Casto, Fanda Lora, Adeen Flinker, Sasha Devore, Werner Doyle, Patricia Dugan, Daniel Friedman, Avinatan Hassidim, Michael Brenner, Yossi Matias, Ken A. Norman, Orrin Devinsky, and Uri Hasson. Thinking ahead: prediction in context as a keystone of language in humans and machines. bioRxiv 2020.12.02.403477, January 2021.
Cited by
19 articles.