Abstract
Human verbal communication requires a rapid interplay between speech planning, production, and comprehension. These processes are subserved by local and long-range neural dynamics across widely distributed brain areas. However, how linguistic information is precisely represented during natural conversation, and which neural processes are shared between speaking and listening, remain largely unknown. Here we used intracranial neural recordings from participants engaged in free dialogue, together with deep learning natural language processing models, and found a striking similarity not only between neural and artificial network activities but also between how linguistic information is encoded in the brain during production and comprehension. Neural activity patterns that encoded linguistic information were closely aligned with those reflecting speaker-listener transitions, and were reduced after word utterance or when no conversation was taking place. These patterns were also observed across distinct mesoscopic areas and frequency bands during both production and comprehension, suggesting that they reflect the hierarchically structured information conveyed during dialogue. Together, these findings suggest that linguistic information is encoded in the brain through similar neural representations during both speaking and listening, and begin to reveal the distributed neural dynamics subserving human communication.
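A comparison of this kind is typically implemented as a linear "encoding model" that maps contextual word embeddings from a pretrained language model onto per-word neural activity, fitted separately for production and comprehension. The sketch below illustrates that general approach, not the authors' actual pipeline: the choice of GPT-2, the ridge regularization, and the randomly generated stand-in neural arrays (Y_prod and Y_comp, meant to represent per-word high-gamma power per electrode while speaking and listening) are all illustrative assumptions.

```python
# A minimal sketch of a language-model encoding analysis, assuming the
# general approach common in this literature (not the authors' code):
# contextual word embeddings are regressed onto per-word neural activity,
# separately for production and comprehension, and the weights compared.
import numpy as np
import torch
from transformers import GPT2Model, GPT2TokenizerFast
from sklearn.linear_model import RidgeCV

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
model = GPT2Model.from_pretrained("gpt2").eval()

def word_embeddings(words):
    """Return one contextual embedding per word (mean over its sub-tokens)."""
    enc = tokenizer(words, is_split_into_words=True, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]        # (n_tokens, 768)
    ids = enc.word_ids()                                  # token -> word index
    out = np.zeros((len(words), hidden.shape[1]), dtype=np.float32)
    for w in range(len(words)):
        rows = [t for t, i in enumerate(ids) if i == w]
        out[w] = hidden[rows].mean(dim=0).numpy()
    return out

words = "so how did the experiment go yesterday".split()
X = word_embeddings(words)                                # (n_words, 768)

# Hypothetical per-word neural features, e.g. mean high-gamma power per
# electrode around each word onset; random numbers stand in for real data.
rng = np.random.default_rng(0)
Y_prod = rng.standard_normal((len(words), 16))            # while speaking
Y_comp = rng.standard_normal((len(words), 16))            # while listening

alphas = np.logspace(-2, 4, 7)
enc_prod = RidgeCV(alphas=alphas).fit(X, Y_prod)
enc_comp = RidgeCV(alphas=alphas).fit(X, Y_comp)

# One way to quantify a shared representation: correlate the encoding
# weights fitted during production with those fitted during comprehension.
r = np.corrcoef(enc_prod.coef_.ravel(), enc_comp.coef_.ravel())[0, 1]
print(f"production/comprehension encoding-weight similarity: r = {r:.2f}")
```

With real recordings one would evaluate cross-validated prediction accuracy rather than raw weight correlations on a handful of words; the sketch only fixes the overall shape of the analysis.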
Publisher
Cold Spring Harbor Laboratory
Cited by
4 articles.