An Investigation of Applying Large Language Models to Spoken Language Learning
Published: 2023-12-26
Issue: 1
Volume: 14
Page: 224
ISSN: 2076-3417
Container-title: Applied Sciences
Language: en
Short-container-title: Applied Sciences
Author:
Gao Yingming 1 (ORCID), Nuchged Baorian 2, Li Ya 1, Peng Linkai 3
Affiliation:
1. School of Artificial Intelligence, Beijing University of Posts and Telecommunications, Beijing 100876, China
2. Department of Linguistics, The University of Texas at Austin, Austin, TX 78712, USA
3. NetEase Youdao, Beijing 100193, China
Abstract
People have long desired intelligent conversational systems that can provide assistance in practical scenarios. The latest advancements in large language models (LLMs) are making significant strides toward turning this aspiration into a tangible reality. LLMs are believed to hold the most potential and value in education, especially in the creation of AI-driven virtual teachers that facilitate language learning. This study focuses on assessing the effectiveness of LLMs in the educational domain, specifically in spoken language learning, which encompasses phonetics, phonology, and second language acquisition. To this end, we first introduced a new multiple-choice question dataset to evaluate LLMs in these scenarios, covering both the understanding and the application of spoken language knowledge. Moreover, we investigated the influence of various prompting techniques, such as zero- and few-shot methods (prepending the question with question-answer exemplars), chain-of-thought (CoT) prompting, in-domain exemplars, and external tools. We conducted a comprehensive evaluation of 20 popular LLMs using these methods. The experimental results showed that questions probing conceptual knowledge posed few challenges for these LLMs, whereas application questions were comparatively difficult. In addition, prompting methods that have been widely proven effective, when combined with domain-specific exemplars, yielded significant performance improvements over the zero-shot baselines. Further preliminary experiments also revealed the strengths and weaknesses of different LLMs. The findings of this study can shed light on the application of LLMs to spoken language learning.
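To make the compared prompting set-ups concrete, the sketch below builds zero-shot, few-shot, and chain-of-thought (CoT) style prompts for a multiple-choice question. It is a minimal illustration only: the example question, the exemplar, and the build_prompt helper are hypothetical and are not taken from the paper's released dataset or code.

```python
# Minimal sketch of the prompting styles discussed in the abstract:
# zero-shot, few-shot (question-answer exemplars prepended), and CoT.
# The question text, exemplar, and helper names are assumptions for illustration.

from dataclasses import dataclass
from typing import List, Optional, Sequence


@dataclass
class MCQ:
    question: str
    options: List[str]               # e.g. ["A. ...", "B. ...", ...]
    rationale: Optional[str] = None  # worked reasoning, used only for CoT exemplars
    answer: Optional[str] = None     # gold option letter, used only for exemplars


def build_prompt(target: MCQ, exemplars: Sequence[MCQ] = (), use_cot: bool = False) -> str:
    """Prepend optional question-answer exemplars, then append the target question."""
    parts = []
    for ex in exemplars:
        block = f"Question: {ex.question}\n" + "\n".join(ex.options)
        if use_cot and ex.rationale:
            block += f"\nReasoning: {ex.rationale}"
        block += f"\nAnswer: {ex.answer}"
        parts.append(block)
    target_block = f"Question: {target.question}\n" + "\n".join(target.options)
    target_block += "\nLet's think step by step.\nAnswer:" if use_cot else "\nAnswer:"
    parts.append(target_block)
    return "\n\n".join(parts)


# Hypothetical in-domain (phonetics) exemplar and target question.
exemplar = MCQ(
    question="Which English phoneme is a voiced bilabial stop?",
    options=["A. /p/", "B. /b/", "C. /t/", "D. /k/"],
    rationale="A bilabial stop is produced with both lips; /b/ is the voiced one.",
    answer="B",
)
target = MCQ(
    question="Which feature distinguishes /s/ from /z/?",
    options=["A. Place of articulation", "B. Voicing", "C. Nasality", "D. Vowel height"],
)

print(build_prompt(target))                                      # zero-shot
print(build_prompt(target, exemplars=[exemplar]))                # few-shot
print(build_prompt(target, exemplars=[exemplar], use_cot=True))  # few-shot with CoT
```

The resulting prompt strings would then be sent to whichever LLM is being evaluated; the study's actual dataset, prompt wording, and tool integrations may differ from this sketch.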
Funder
Key Project of the National Language Commission; Fundamental Research Funds for the Central Universities; National Natural Science Foundation of China
Subject
Fluid Flow and Transfer Processes, Computer Science Applications, Process Chemistry and Technology, General Engineering, Instrumentation, General Materials Science