Authors:
Chaudhari Rupal, Patel Manish
Abstract
Automated Short Answer Grading (ASAG) evaluates students' written short answers using computer algorithms, and has been studied for a considerable time [4]. A significant obstacle in ASAG is the scarcity of relevant in-domain training data. Approaches to the task fall broadly into two categories: traditional methods that rely on handcrafted features, and Deep Learning-based methods [22]. Over the past five years, researchers in this field have increasingly adopted Deep Learning techniques to address the ASAG challenge [6]. This survey examines, across 38 publications, whether Deep Learning-based strategies are superior to traditional methods, and provides a comprehensive review of the deep learning methodologies that academics have investigated for the problem [19]. The study also analyses several state-of-the-art datasets suitable for ASAG tasks and recommends evaluation metrics appropriate for regression and classification settings.
References
1. Alikaniotis D., Yannakoudakis H., Rei M.: Automatic text scoring using neural networks. In: Proceedings of the 54th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers), Berlin, Germany, pp. 715–725. Association for Computational Linguistics, August 2016. https://doi.org/10.18653/v1/P16-1068. https://www.aclweb.org/anthology/P16-1068
2. Angelov P., Sperduti A.: Challenges in deep learning. In: ESANN (2016)
3. Basu S., Jacobs C., Vanderwende L.: Powergrading: a clustering approach to amplify human effort for short answer grading. Trans. Assoc. Comput. Linguist. 1, 391–402 (2013)
4. Beltagy I., Peters M.E., Cohan A.: Longformer: the long-document transformer. arXiv preprint arXiv:2004.05150 (2020)
5. Benesty J., Chen J., Huang Y., Cohen I.: Pearson correlation coefficient. In: Benesty J., Chen J., Huang Y., Cohen I. (eds.) Noise Reduction in Speech Processing. STSP, vol. 2, pp. 1–4. Springer, Heidelberg (2009). https://doi.org/10.1007/978-3-642-00296-0_5