Affiliation:
1. Faculty of Humanities and Social Sciences, Khon Kaen University, Khon Kaen, THAILAND
Abstract
Automated essay scoring (AES) has become a valuable tool in educational settings, providing efficient and objective evaluations of student essays. However, most AES systems have focused primarily on native English speakers, leaving a critical gap in the evaluation of non-native speakers' writing skills. This research addresses that gap by exploring the effectiveness of automated essay-scoring methods designed specifically for non-native speakers. The study acknowledges the unique challenges posed by variations in language proficiency, cultural differences, and linguistic complexity when assessing non-native speakers' writing abilities. The work focuses on the Automated Student Assessment Prize (ASAP) dataset and the Khon Kaen University academic English language test dataset, and presents an approach that uses variants of the long short-term memory (LSTM) network to learn features, comparing results with the Kappa coefficient. The findings demonstrate that the proposed framework, which jointly learns different essay representations, yields significant benefits and achieves results comparable to state-of-the-art deep learning models. These results suggest that the text representation proposed in this paper is a promising new choice for assessing the writing tasks of non-native speakers. The results of this study can be applied to advance educational assessment practices and promote equitable opportunities for language learners worldwide by improving the evaluation process for non-native speakers.
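The abstract states that model outputs are compared using the Kappa coefficient; for the ASAP dataset the standard variant is the quadratic weighted kappa (QWK), which measures agreement between predicted and human-assigned integer scores. The following is a minimal illustrative sketch of QWK (not the authors' implementation; function and variable names are my own):

```python
import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, min_rating, max_rating):
    """Quadratic weighted kappa between two vectors of integer scores.

    rater_a / rater_b: score lists (e.g. model predictions vs. human scores).
    min_rating / max_rating: the inclusive score range of the essay prompt.
    """
    rater_a = np.asarray(rater_a)
    rater_b = np.asarray(rater_b)
    n = max_rating - min_rating + 1

    # Observed agreement (confusion) matrix O.
    O = np.zeros((n, n))
    for a, b in zip(rater_a, rater_b):
        O[a - min_rating, b - min_rating] += 1

    # Expected matrix E from the outer product of the marginal histograms.
    hist_a = O.sum(axis=1)
    hist_b = O.sum(axis=0)
    E = np.outer(hist_a, hist_b) / O.sum()

    # Quadratic disagreement weights: (i - j)^2 / (n - 1)^2.
    idx = np.arange(n)
    W = (idx[:, None] - idx[None, :]) ** 2 / (n - 1) ** 2

    return 1.0 - (W * O).sum() / (W * E).sum()
```

A QWK of 1.0 indicates perfect agreement with the human raters, 0.0 indicates chance-level agreement, and negative values indicate systematic disagreement.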
Subject
Management of Technology and Innovation, Education