Abstract
A central challenge in neuroscience is decoding brain activity to uncover mental content comprising multiple components and their interactions. Despite progress in decoding language-related information from human brain activity1-4, generating comprehensive descriptions of the intricate, structured mental content linked to visual semantics has remained elusive5-12. Here, we present a method that progressively generates descriptive text mirroring brain representations via semantic features computed by a deep language model. We constructed linear decoding models that translate brain activity, measured by functional magnetic resonance imaging (fMRI) while subjects viewed videos, into semantic features of the corresponding video captions. We then iteratively optimized candidate descriptions by aligning their semantic features with the brain-decoded features through word replacement and interpolation. This process yielded increasingly well-structured descriptions that faithfully captured the viewed content. Remarkably, comprehensible descriptions were generated even when the fronto-temporal language areas were excluded from the analysis, highlighting explicit representations of structured semantic information outside the typical language network. Additionally, our method generalized to the generation of descriptions of imagined content, providing a means to interpret intricate mental content by translating brain signals into linguistic descriptions. These findings pave the way for non-verbal, thought-based brain-to-text communication, potentially aiding individuals with difficulties in language expression.
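As a rough illustration of the iterative optimization step summarized above, the following minimal sketch (not the authors' implementation) greedily edits a candidate description one word at a time and keeps the edit whose semantic features are most similar, by cosine similarity, to the brain-decoded features. The feature extractor, candidate vocabulary, and decoded feature vector are hypothetical placeholders, and the caption-interpolation step is omitted for brevity.

    import numpy as np

    def cosine(a, b):
        # Cosine similarity between two feature vectors.
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

    def optimize_description(decoded_features, extract_features, vocabulary,
                             init="a person", n_iters=100):
        # Greedy word replacement/insertion guided by brain-decoded semantic
        # features. `extract_features` maps a text string to a feature vector
        # comparable to `decoded_features`; both are assumed, not prescribed.
        words = init.split()
        best_score = cosine(extract_features(" ".join(words)), decoded_features)
        for _ in range(n_iters):
            improved = False
            for pos in range(len(words) + 1):
                for w in vocabulary:
                    # Candidate with w inserted at pos, and (if valid) with w
                    # replacing the word currently at pos.
                    candidates = [words[:pos] + [w] + words[pos:]]
                    if pos < len(words):
                        candidates.append(words[:pos] + [w] + words[pos + 1:])
                    for cand in candidates:
                        s = cosine(extract_features(" ".join(cand)), decoded_features)
                        if s > best_score:
                            best_score, words, improved = s, cand, True
            if not improved:
                break  # converged: no single-word edit improves alignment
        return " ".join(words), best_score

In practice, a caller would supply a sentence-feature model for extract_features and the vector produced by the linear fMRI decoders for decoded_features; the loop then evolves the description toward the decoded semantics.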
Publisher: Cold Spring Harbor Laboratory