Affiliation:
1. University of Georgia
2. Zhejiang Normal University
Abstract
Multidimensional scoring evaluates each constructed-response answer along more than one rating dimension or trait, such as lexicon, organization, and supporting ideas, instead of assigning a single holistic score, helping students distinguish among the various dimensions of writing quality. In this work, we present a bilevel learning model that combines two objectives: multidimensional automated scoring and the analysis and interpretation of students' writing structure. The dual objectives are enabled by a supervised model, Latent Dirichlet Allocation Multitask Learning (LDAMTL), which integrates a topic model and a multitask learning model with an attention mechanism. Two empirical data sets were used to evaluate LDAMTL's performance. On one hand, the results suggested that LDAMTL achieves better scoring accuracy and QW-κ values than two competitor models, supervised latent Dirichlet allocation and Bidirectional Encoder Representations from Transformers, at the 5% significance level. On the other hand, the extracted topic structures revealed that students with higher language scores tended to employ more compelling words to support the arguments in their answers. This study suggests that LDAMTL not only improves model performance by conjugating the underlying shared topic representation with the representation learned by the neural networks but also helps in understanding students' writing.
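The abstract describes fusing a topic model's per-document topic distribution with an attention-pooled neural representation, then feeding the shared representation to one output head per scoring dimension. The paper's actual architecture and dimensions are not given here, so the following is only a minimal NumPy sketch under assumed sizes; every name (`tokens`, `theta`, `w_att`, the 50/64/10/3 dimensions) is a hypothetical placeholder, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes (assumptions, not from the paper): 50 tokens per answer,
# 64-dim token embeddings, 10 LDA topics, 3 scoring dimensions
# (e.g., lexicon, organization, supporting ideas).
T, D, K, S = 50, 64, 10, 3

tokens = rng.normal(size=(T, D))   # stand-in for encoder outputs of one answer
theta = rng.dirichlet(np.ones(K))  # stand-in for the LDA topic distribution

# Additive attention pooling over the token representations.
att_scores = tokens @ rng.normal(size=D)
alpha = np.exp(att_scores - att_scores.max())
alpha /= alpha.sum()                        # attention weights sum to 1
pooled = alpha @ tokens                     # (D,) attention-weighted summary

# Shared representation: neural summary conjugated with topic proportions.
shared = np.concatenate([pooled, theta])    # (D + K,)

# Multitask output: one linear head per scoring dimension.
W = rng.normal(size=(S, D + K))
preds = W @ shared                          # (S,) one score per trait

print(preds.shape)  # (3,)
```

In a trained model the heads would share the encoder and attention parameters while each head is fit to its own trait's ratings; here the weights are random purely to show the data flow.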
Cited by 4 articles.