Difficulty-level modeling of ontology-based factual questions

Authors:

Vinu E. Venugopal (1), P. Sreenivasa Kumar (2)

Affiliations:

1. Computer Science and Communications Research Unit, University of Luxembourg, Belval, Luxembourg. E-mail: vinu.venugopal@uni.lu

2. Department of Computer Science and Engineering, Indian Institute of Technology Madras, Chennai, India. E-mail: psk@cse.iitm.ac.in

Abstract

Semantics-based knowledge representations such as ontologies have proved very useful for automatically generating meaningful factual questions. Determining the difficulty level of these system-generated questions is helpful for utilizing them effectively in various educational and professional applications. The existing approach for predicting the difficulty level of factual questions uses only a few naive features, and its accuracy (F-measure) is close to only 50% on our benchmark set of 185 questions. In this paper, we propose a new methodology for this problem by identifying new features and by incorporating an educational theory related to question difficulty, namely Item Response Theory (IRT). In IRT, the knowledge proficiency of end users (learners) is taken into account when assigning difficulty levels, based on the assumption that a given question is perceived differently by learners of different proficiency levels. We carry out a detailed study of the features/factors of a question statement that could determine its difficulty level for three learner categories (experts, intermediates, and beginners), and we formulate ontology-based metrics for them. We then train three logistic regression models to predict the difficulty level corresponding to the three learner categories. The outputs of these models are interpreted using IRT to determine a question's overall difficulty level. The accuracy of the three models under cross-validation lies in a satisfactory range (67–84%). The proposed model (comprising three classifiers) outperforms the existing model by more than 20% in precision, recall, and F1-score.
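
The abstract outlines the core pipeline: one logistic regression classifier per learner category (expert, intermediate, beginner), whose per-category predictions are then interpreted via IRT to obtain an overall difficulty level. The Python sketch below illustrates only that structure; the feature matrix, the per-category labels, and the vote-counting combination rule are hypothetical stand-ins, since the paper's ontology-based metrics and its exact IRT-based interpretation are not spelled out in the abstract.

```python
# Illustrative sketch only. The features here are random placeholders for the
# paper's ontology-based metrics, and the label data is synthetic.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical feature vectors for 200 questions.
X = rng.random((200, 5))

# Hypothetical binary difficulty judgements (1 = hard, 0 = easy) as perceived
# by each learner category, reflecting the IRT assumption that the same
# question is perceived differently at different proficiency levels.
labels = {
    "expert": rng.integers(0, 2, 200),
    "intermediate": rng.integers(0, 2, 200),
    "beginner": rng.integers(0, 2, 200),
}

# One logistic regression model per learner category.
models = {cat: LogisticRegression().fit(X, y) for cat, y in labels.items()}

def overall_difficulty(x):
    """Combine the three per-category predictions into an overall level.

    The mapping below (number of categories that find the question hard ->
    easy/medium/hard) is an assumed stand-in for the paper's IRT-based
    interpretation, which the abstract does not detail.
    """
    hard_votes = sum(int(m.predict(x.reshape(1, -1))[0]) for m in models.values())
    return ["easy", "medium", "hard", "hard"][hard_votes]

print(overall_difficulty(X[0]))
```

Running the sketch prints one of easy/medium/hard for the first synthetic question; with real data, the three classifiers would be trained on difficulty ratings collected from the corresponding learner groups.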

Publisher

IOS Press

Subject

Computer Networks and Communications, Computer Science Applications, Information Systems

Cited by 8 articles.

1. Difficulty-controllable question generation over knowledge graphs: A counterfactual reasoning approach. Information Processing & Management, 2024-07.

2. Research and Design of Automatic Questioning System Based on Question Generation. 2024 12th International Conference on Information and Education Technology (ICIET), 2024-03-18.

3. Predicting Tacit Coordination Success Using Electroencephalogram Trajectories: The Impact of Task Difficulty. Sensors, 2023-11-29.

4. Text-based Question Difficulty Prediction: A Systematic Review of Automatic Approaches. International Journal of Artificial Intelligence in Education, 2023-09-08.

5. Semantics-Aware Document Retrieval for Government Administrative Data. International Journal of Semantic Computing, 2023-07-31.
