Abstract
Dynamic assessments (DAs) of word reading skills demonstrate strong criterion-referenced validity with word reading measures (WRMs). However, DAs vary in the skills they assess, their format and administration method, and the types of words and symbols used in test items. These characteristics may have implications for assessment validity. To compare the validity of DAs of word reading skills across these factors of interest, a systematic search of five databases and the grey literature was conducted. We identified 35 studies that met the inclusion criteria: evaluating participants aged 4–10 years, using a DA of word reading skills, and reporting a Pearson’s correlation coefficient as an effect size. A random-effects meta-analysis with robust variance estimation and subgroup analyses by DA characteristics was conducted. There were no significant differences in mean effect size by administration method (computer vs. in-person) or symbol type (familiar vs. novel). However, DAs that evaluate phonological awareness or decoding (vs. sound-symbol knowledge), those that use a graduated prompt format (vs. test-teach-retest), and those that use nonwords (vs. real words) demonstrated significantly stronger correlations with WRMs. These results inform the selection of DAs in clinical and research settings and the development of novel, valid DAs of word reading skills.
Publisher
Cold Spring Harbor Laboratory
Cited by
1 article.