Affiliation:
1. Department of Psychology, Cornell University
2. Center for Humanities Computing, Aarhus University
3. Interacting Minds Centre, Aarhus University
4. School of Communication and Culture, Aarhus University
5. Haskins Laboratories
Abstract
To what degree can language be acquired from linguistic input alone? This question has vexed scholars for millennia and is still a major focus of debate in the cognitive science of language. The complexity of human language has hampered progress because studies of language, especially those involving computational modeling, have only been able to deal with small fragments of our linguistic skills. We suggest that the most recent generation of Large Language Models (LLMs) might finally provide the computational tools to determine empirically how much of the human language ability can be acquired from linguistic experience. LLMs are sophisticated deep learning architectures trained on vast amounts of natural language data, enabling them to perform an impressive range of linguistic tasks. We argue that, despite their clear semantic and pragmatic limitations, LLMs have already demonstrated that human‐like grammatical language can be acquired without the need for a built‐in grammar. Thus, while there is still much to learn about how humans acquire and use language, LLMs provide full‐fledged computational models for cognitive scientists to empirically evaluate just how far statistical learning might take us in explaining the full complexity of human language.
Subject
Artificial Intelligence, Cognitive Neuroscience, Experimental and Cognitive Psychology
Cited by: 31 articles.