Abstract
The first 40 years of research on the neurobiology of sign languages (1960–2000) established that the same key left-hemisphere brain regions support both signed and spoken languages. This conclusion was based primarily on evidence from signers with brain injury and, at the end of the 20th century, on evidence from emerging functional neuroimaging technologies (positron emission tomography and fMRI). Building on this earlier work, this review focuses on what we have learned about the neurobiology of sign languages in the last 15–20 years, which controversies remain unresolved, and directions for future research. Production and comprehension processes are addressed separately in order to capture whether and how output and input differences between sign and speech affect the neural substrates supporting language. In addition, the review covers aspects of language that are unique to sign languages, such as pervasive lexical iconicity, fingerspelling, linguistic facial expressions, and depictive classifier constructions. Summary sketches of the neural networks supporting sign language production and comprehension are provided in the hope that they will inspire future research as we begin to develop a more complete neurobiological model of sign language processing.
Funder
National Institutes of Health
Cited by
7 articles.
1. Language Learning and Social Interaction;Fundamentals of Developmental Cognitive Neuroscience;2024-02-01
2. Book review;Journal of Pragmatics;2023-11
3. Neural mechanisms of event visibility in sign languages;Language, Cognition and Neuroscience;2023-06-29
4. Ten Things You Should Know About Sign Languages;Current Directions in Psychological Science;2023-05-15
5. Multi-cue temporal modeling for skeleton-based sign language recognition;Frontiers in Neuroscience;2023-04-05