Author:
Laila Alrajhi, Ahmed Alamri, Filipe Dwan Pereira, Alexandra I. Cristea, Elaine H. T. Oliveira
Abstract
In MOOCs, identifying urgent comments on discussion forums is an ongoing challenge. Urgent comments require immediate reactions from instructors, to improve interaction with their learners and potentially reduce drop-out rates; however, the task is difficult, as truly urgent comments are rare. From a data analytics perspective, this represents a highly unbalanced (sparse) dataset. Here, we aim to automate the identification of urgent comments, based on fine-grained learner modelling, to be used for automatic recommendations to instructors. To showcase and compare these models, we apply them to the first gold-standard dataset for Urgent iNstructor InTErvention (UNITE), which we created by labelling FutureLearn MOOC data. We implement both benchmark shallow classifiers and deep learning models. Importantly, we not only compare, for the first time for this unbalanced problem, several data-balancing techniques, comprising text augmentation, text augmentation with undersampling, and undersampling alone, but also propose several new pipelines for combining different augmenters for text augmentation. Results show that models with undersampling can predict most urgent cases, and that 3X augmentation + undersampling usually attains the best performance. We additionally validate the best models on a generic benchmark dataset (Stanford). As a case study, we showcase how naïve Bayes with a count vectoriser can adaptively support instructors in answering learner questions/comments, potentially saving time or increasing efficiency in supporting learners. Finally, we show that the errors of the classifier mirror the disagreements between annotators. Thus, our proposed algorithms perform at least as well as a 'super-diligent' human instructor (one with the time to consider all comments).
Publisher
Springer Science and Business Media LLC
Subject
Computer Science Applications, Human-Computer Interaction, Education
Cited by: 1 article.