Abstract
The current study examined the validity of a General English Achievement Test (GEAT), administered to university students in the fall semester of the 2018–2019 academic year, by combining differential item functioning (DIF) and differential distractor functioning (DDF) analytical models. Using purposive sampling from the target population of undergraduate students studying in different disciplines at Islamic Azad University (IAU), Isfahan branch, a sample of 835 students taking the GEAT was selected. The 60-item multiple-choice test comprised four sub-sections: vocabulary, grammar, cloze test, and reading comprehension. The students’ test scores served as the data, and the validity of the test was examined by applying the Cochran-Mantel-Haenszel (CMH) and multinomial log-linear regression models to detect DIF and DDF, respectively. To satisfy the assumption of uni-dimensionality, the test sub-sections were analyzed independently. The assumption of local independence was checked through correlational analysis, and no extreme values were observed. The analysis identified five moderate-level DIF items and one DDF item, signaling an adverse effect on test fairness due to the biased items. These findings may have important implications for both language policymakers and test developers.
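The abstract names the Cochran-Mantel-Haenszel procedure as the DIF-detection model. As a minimal sketch of how that statistic works: examinees are stratified by matched ability (total score), each stratum yields a 2×2 table of group (reference/focal) by item response (correct/incorrect), and the CMH chi-square tests whether correctness is associated with group membership after conditioning on ability. The GEAT response data are not published, so the counts below are invented for illustration, and the ETS delta cutoffs mentioned in the comments are an assumption about how "moderate-level" DIF was classified, not a claim from the paper.

```python
import math

def cmh_dif(tables):
    """CMH chi-square (with continuity correction) across score strata.

    Each stratum table is ((a, b), (c, d)):
        a = reference-group correct,  b = reference-group incorrect
        c = focal-group correct,      d = focal-group incorrect
    Returns (chi2, common odds ratio alpha, ETS delta effect size).
    """
    sum_a = sum_e = sum_v = 0.0
    for (a, b), (c, d) in tables:
        n1, n0 = a + b, c + d            # group totals in the stratum
        m1, m0 = a + c, b + d            # correct / incorrect totals
        t = n1 + n0                      # stratum total
        sum_a += a
        sum_e += n1 * m1 / t             # expected a under "no DIF"
        sum_v += n1 * n0 * m1 * m0 / (t * t * (t - 1))
    chi2 = (abs(sum_a - sum_e) - 0.5) ** 2 / sum_v
    # Mantel-Haenszel common odds ratio and the ETS delta metric;
    # roughly, 1.0 <= |delta| < 1.5 is "moderate" (B-level) DIF.
    num = sum(a * d / (a + b + c + d) for (a, b), (c, d) in tables)
    den = sum(b * c / (a + b + c + d) for (a, b), (c, d) in tables)
    alpha = num / den
    delta = -2.35 * math.log(alpha)
    return chi2, alpha, delta

# Synthetic item: three ability strata, reference group outperforms
# the matched focal group in each stratum (a sign of potential DIF).
tables = [((40, 10), (30, 20)), ((35, 15), (25, 25)), ((45, 5), (35, 15))]
chi2, alpha, delta = cmh_dif(tables)
```

A significant chi-square with alpha > 1 (delta < 0) would flag the item as easier for the reference group at equal ability, which is the pattern the study reports for its five moderate-level DIF items.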
Publisher
Springer Science and Business Media LLC
Subject
Linguistics and Language, Language and Linguistics
References
69 articles.
Cited by
3 articles.