Abstract
Estimating the difficulty of reading texts is critical in second language education and assessment. This study examined various text features that might influence the difficulty level of a high-stakes reading comprehension test and predict test takers’ scores. To this end, the responses provided by 17,900 test takers on the reading comprehension subsection of a major high-stakes test, the Iranian National University Entrance Exam for the Master’s Program, were examined. Overall, 63 reading passages in different versions of the test from 2017 to 2019 were studied, with a focus on 16 indices that might help explain reading difficulty and test takers’ scores. The results showed that the content word overlap index and the Flesch-Kincaid Reading Ease formula correlated significantly with the observed difficulty and could therefore be considered better predictors of test difficulty than the other variables. The findings suggest using various indices to estimate reading difficulty before administering tests, in order to ensure the equivalency and validity of tests.
Publisher
Research Square Platform LLC