Distilling vector space model scores for the assessment of constructed responses with bifactor Inbuilt Rubric method and latent variables

Authors

Martínez-Huertas, José Ángel; Olmos, Ricardo; Jorge-Botana, Guillermo; León, José A.

Abstract

In this paper, we highlight the importance of distilling computational assessments of constructed responses to validate the indicators (proxies) of the constructs (trins) being measured, using an empirical illustration in automated summary evaluation. We present the validation of the Inbuilt Rubric (IR) method, which maps rubrics into vector spaces for concept assessment. Specifically, we improved and validated the performance of its scores using latent variables, a common approach in psychometrics. We also validated a new hierarchical vector space, namely a bifactor IR. A total of 205 Spanish undergraduate students produced 615 summaries of three different texts, which were evaluated by human raters and by different versions of the IR method using latent semantic analysis (LSA). The computational scores were validated using multiple linear regressions and latent variable models such as confirmatory factor analyses (CFAs) and structural equation models (SEMs). Convergent and discriminant validity were found for the IR scores using human rater scores as validity criteria. Although this study was conducted in Spanish, the proposed scheme is language-independent and applicable to any language. We highlight four main conclusions: (1) accurate performance can be achieved in topic-detection tasks without the hundreds or thousands of pre-scored samples that supervised models require; (2) convergent/discriminant validity can be improved by using measurement models for the computational scores, since they adjust for measurement error; (3) nouns embedded in fragments of instructional text can be an affordable alternative for applying the IR method; and (4) hierarchical models, like the bifactor IR, can increase the validity of computational assessments by evaluating general and specific knowledge in vector space models. R code is provided to apply the classic and bifactor IR methods.
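To make the computational step concrete, below is a minimal R sketch of the classic IR scoring idea: rubric concepts are mapped onto axes of an LSA space, and a response is projected onto those axes. All object and function names (lsa_space, rubric_seeds, score_inbuilt_rubric) are hypothetical, and the orthonormalization step is one plausible reading of the method, not the authors' published R code.

# Minimal sketch of classic Inbuilt Rubric (IR) scoring (illustrative only).
# Assumptions (hypothetical, not the authors' released code):
#   lsa_space      : term-by-dimension matrix from an LSA model (rows = words)
#   rubric_seeds   : named list; each element holds seed words for one concept
#   response_words : character vector of tokens from one student summary
score_inbuilt_rubric <- function(lsa_space, rubric_seeds, response_words) {
  # One axis per rubric concept: normalized sum of the concept's seed vectors.
  axes <- sapply(rubric_seeds, function(words) {
    v <- colSums(lsa_space[intersect(words, rownames(lsa_space)), , drop = FALSE])
    v / sqrt(sum(v^2))
  })
  # Orthonormalize the axes (QR) so each score isolates one concept.
  q <- qr.Q(qr(axes))
  # QR can flip a column's sign; realign each axis with its raw concept vector.
  q <- sweep(q, 2, sign(colSums(q * axes)), "*")
  # Represent the response as the sum of its in-vocabulary word vectors.
  r <- colSums(lsa_space[intersect(response_words, rownames(lsa_space)), , drop = FALSE])
  # Concept scores: projections of the response vector onto each rubric axis.
  setNames(as.vector(t(q) %*% r), names(rubric_seeds))
}

# Example call (names illustrative):
# seeds  <- list(gravity = c("gravedad", "masa", "fuerza"),
#                orbits  = c("orbita", "planeta", "trayectoria"))
# scores <- score_inbuilt_rubric(lsa_space, seeds, tokens_of_one_summary)

The QR step stands in for the change of basis that gives the method its name: once the concept axes are orthonormal, each projection reflects one rubric concept rather than mixing overlapping vocabulary across concepts.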

Funder

Universidad Autónoma de Madrid

Publisher

Springer Science and Business Media LLC

Subject

General Psychology, Psychology (miscellaneous), Arts and Humanities (miscellaneous), Developmental and Educational Psychology, Experimental and Cognitive Psychology

