Abstract
In this paper, we highlight the importance of distilling the computational assessments of constructed responses to validate the indicators/proxies of constructs/traits, using an empirical illustration in automated summary evaluation. We present the validation of the Inbuilt Rubric (IR) method, which maps rubrics into vector spaces for the assessment of concepts. Specifically, we improved and validated the performance of its scores using latent variables, a common approach in psychometrics. We also validated a new hierarchical vector space, namely a bifactor IR. A total of 205 Spanish undergraduate students produced 615 summaries of three different texts, which were evaluated by human raters and by different versions of the IR method using latent semantic analysis (LSA). The computational scores were validated using multiple linear regressions and different latent variable models such as confirmatory factor analyses (CFAs) and structural equation models (SEMs). Convergent and discriminant validity was found for the IR scores using human rater scores as validity criteria. Although this study was conducted in Spanish, the proposed scheme is language-independent and applicable to any language. We highlight four main conclusions: (1) Accurate performance can be observed in topic-detection tasks without the hundreds or thousands of pre-scored samples required by supervised models. (2) Convergent/discriminant validity can be improved using measurement models for computational scores, as they adjust for measurement error. (3) Nouns embedded in fragments of instructional text can be an affordable alternative for applying the IR method. (4) Hierarchical models, such as the bifactor IR, can increase the validity of computational assessments by evaluating general and specific knowledge in vector space models. R code is provided to apply the classic and bifactor IR methods.
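The abstract notes that R code accompanies the paper. As a rough sketch of the underlying idea only (not the authors' released code), the following R fragment shows how rubric concepts might be mapped into an LSA space and a summary scored by its projection on each concept axis. The names score_inbuilt_rubric, lsa_terms, and rubric are illustrative assumptions, and the specific choices (averaging descriptor-word vectors per concept, orthonormalizing the concept axes via QR, scoring by projection) are one plausible reading of the approach, not its exact specification.

# Illustrative sketch (assumed names, not the authors' code): map rubric
# concepts into an LSA space and score a summary on each concept axis.
# 'lsa_terms' is assumed to be a terms-by-dimensions matrix from an existing
# LSA space (e.g., built with the 'lsa' package); 'rubric' is a named list of
# descriptor words (e.g., nouns from the instructional text) per concept.
score_inbuilt_rubric <- function(lsa_terms, rubric, summary_tokens) {
  # One centroid vector per rubric concept, averaged over its descriptor words
  concept_mat <- sapply(rubric, function(words) {
    words <- intersect(words, rownames(lsa_terms))
    colMeans(lsa_terms[words, , drop = FALSE])
  })
  # Orthonormalize the concept axes so the concept scores do not overlap
  concept_axes <- qr.Q(qr(concept_mat))
  # Represent the summary as the sum of its word vectors in the LSA space
  summary_tokens <- intersect(summary_tokens, rownames(lsa_terms))
  summary_vec <- colSums(lsa_terms[summary_tokens, , drop = FALSE])
  # Concept scores = projections of the summary vector on each concept axis
  scores <- as.numeric(t(concept_axes) %*% summary_vec)
  names(scores) <- names(rubric)
  scores
}

A hypothetical call would pass the tokenized summary and a rubric such as list(concept1 = c("fuerza", "masa"), concept2 = c("energía", "calor")); the returned vector then holds one score per rubric concept.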
Funder
Universidad Autónoma de Madrid
Publisher
Springer Science and Business Media LLC
Subject
General Psychology, Psychology (miscellaneous), Arts and Humanities (miscellaneous), Developmental and Educational Psychology, Experimental and Cognitive Psychology