Affiliation:
1. George Washington University
2. Northwestern University
Abstract
Clearinghouses set standards of scientific quality to vet existing research and determine how “evidence-based” an intervention is. This paper examines 12 educational clearinghouses to describe their effectiveness criteria, to estimate how consistently they rate the same program, and to probe why their judgments differ. All the clearinghouses value random assignment, but they differ in how they treat its implementation, how they weight quasi-experiments, and how they value ancillary causal factors such as independent replication and persisting effects. A total of 1,359 programs were analyzed across 10 clearinghouses; 83% of them were assessed by a single clearinghouse and, of those rated by more than one, similar ratings were achieved for only about 30% of the programs. This high level of inconsistency appears to stem mostly from clearinghouses disagreeing about whether a high program rating requires effects that are replicated and/or temporally persisting. Clearinghouses exist to identify “evidence-based” programs, but the inconsistency in their recommendations of the same program suggests that identifying “evidence-based” interventions is still more of a policy aspiration than a reliable research practice.
Funder
National Science Foundation
Publisher
American Educational Research Association (AERA)
Cited by: 3 articles.