Affiliation:
1. Arizona State University
2. Georgia Institute of Technology
Abstract
Recent assessment research joining cognitive psychology and psychometric theory has introduced a new technology, item generation. In algorithmic item generation, items are systematically created based on specific combinations of features that underlie the processing required to solve a problem correctly. Reading comprehension items have been more difficult to model than other item types due to the complexities of quantifying text. However, recent developments in artificial intelligence for text analysis permit quantitative indices to represent cognitive sources of difficulty. The current study attempts to identify generative components for Graduate Record Examination paragraph comprehension items through the cognitive decomposition of item difficulty. Text comprehension and decision processes accounted for a significant amount of the variance in item difficulties. The decision model variables contributed significantly to variance in item difficulties, whereas the text representation variables did not. Implications for score interpretation and future possibilities for item generation are discussed.
Index terms: difficulty modeling, construct validity, comprehension tests, item generation
Subject
Psychology (miscellaneous), Social Sciences (miscellaneous)
Cited by
63 articles.