Affiliation:
1. Bolu Abant İzzet Baysal Üniversitesi
2. Milli Savunma Üniversitesi
3. Adıyaman Üniversitesi
Abstract
The aim of the study was to examine whether common items with item parameter drift (IPD) in mixed-format tests (i.e., multiple-choice and essay items) affect test equating performed with the common-item nonequivalent groups design. In this Monte Carlo simulation study with a fully crossed design, the following factors were manipulated: test length (30 and 50 items), sample size (1,000 and 3,000), common item ratio (30% and 40%), ratio of items with IPD among the common items (20% and 30%), location of the common items in the test (at the beginning, randomly distributed, and at the end), and IPD size in the multiple-choice items (low [0.2] and high [1.0]). Four test forms were created: two contained no parameter drift, while drift was introduced into the common items of the other two forms. Equating results were compared using the root mean squared error (RMSE). The results showed that the ratio of items with IPD among the common items, the IPD size in the multiple-choice items, the common item ratio, the sample size, and the test length had significant effects on equating error.
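The fully crossed design described above yields 2 × 2 × 2 × 2 × 3 × 2 = 96 simulation conditions, each evaluated with RMSE. A minimal sketch of enumerating those conditions and computing the RMSE criterion (factor names are illustrative, not taken from the study's code):

```python
import itertools
import math

def rmse(equated, criterion):
    """RMSE between equated scores and the criterion (drift-free) equating."""
    n = len(equated)
    return math.sqrt(sum((e - c) ** 2 for e, c in zip(equated, criterion)) / n)

# Simulation factors from the abstract; dictionary keys are assumed names.
factors = {
    "test_length": [30, 50],
    "sample_size": [1000, 3000],
    "common_item_ratio": [0.30, 0.40],
    "ipd_item_ratio": [0.20, 0.30],
    "common_item_location": ["beginning", "random", "end"],
    "ipd_size_mc": [0.2, 1.0],  # low / high drift in multiple-choice items
}

# Fully crossed design: one condition per combination of factor levels.
conditions = [dict(zip(factors, combo))
              for combo in itertools.product(*factors.values())]
print(len(conditions))  # 96 crossed conditions
```

Each condition would then be replicated, equated with and without drifting common items, and summarized by the RMSE of the equated scores against the drift-free criterion.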
Publisher
Participatory Educational Research (PER)
Subject
Developmental and Educational Psychology, Education