Affiliation:
1. Leibniz Institute for Research and Information in Education
2. Boston College
3. Educational Research Institute
4. Leibniz Institute for Research and Information in Education
Abstract
For assessment scales applied to different groups (e.g., students from different states; patients in different countries), multigroup differential item functioning (MG-DIF) needs to be evaluated in order to ensure that respondents with the same trait level but from different groups have equal response probabilities on a particular item. The current study compares two approaches for DIF detection: a multiple-group item response theory (MG-IRT) model and a generalized linear mixed model (GLMM). In the MG-IRT approach, item parameters are constrained to be equal across groups, and DIF is evaluated for each item in each group. In the GLMM, groups are treated as random, and item difficulties are modeled as correlated random effects with a joint multivariate normal distribution; this nested structure allows the estimation of item difficulty variances and covariances at the group level. We use an excerpt from the PISA 2015 reading domain as an exemplary empirical investigation and conduct a simulation study to compare the performance of the two approaches. Results from the empirical investigation show that both approaches flag similar countries as exhibiting DIF. Results from the simulation study confirm this finding and indicate slight advantages of the MG-IRT approach.
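The DIF condition described above (equal trait level but unequal response probabilities across groups) can be illustrated with a minimal sketch under the Rasch model. All numbers here are hypothetical, chosen only to show how a group-specific shift in item difficulty produces DIF; they are not taken from the study.

```python
import math

def rasch_prob(theta, b):
    """Probability of a correct response under the Rasch model:
    P(X = 1 | theta, b) = 1 / (1 + exp(-(theta - b)))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# Hypothetical example: two groups, same trait level theta = 0.
theta = 0.0
b_group_a = 0.5          # assumed item difficulty in group A
dif_effect = 0.8         # assumed uniform DIF shift for group B
b_group_b = b_group_a + dif_effect

p_a = rasch_prob(theta, b_group_a)
p_b = rasch_prob(theta, b_group_b)

# Respondents with the same trait level have unequal response
# probabilities on this item, i.e., the item exhibits DIF.
print(f"P(correct | group A) = {p_a:.3f}")
print(f"P(correct | group B) = {p_b:.3f}")
```

In the GLMM framing of the abstract, `dif_effect` would not be a fixed constant but a group-level random deviation of the item difficulty, with the deviations across items following a joint multivariate normal distribution.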
Cited by: 1 article.