Affiliation:
1. Carnegie Mellon University,
2. Carnegie Mellon University
Abstract
Interest in end-of-year accountability exams has increased dramatically since the passage of the No Child Left Behind Act in 2001. With this increased interest comes a desire to use student data collected throughout the year to estimate student proficiency and predict how well students will perform on end-of-year exams. This article uses student performance on the Assistment System, an online mathematics tutor, to show that replacing percentage correct with an Item Response Theory estimate of student proficiency leads to better-fitting prediction models. In addition, it uses other tutor performance metrics to further increase prediction accuracy. Prediction error bounds are also calculated to provide an absolute measure against which the models can be compared.
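The core idea of replacing percentage correct with an IRT proficiency estimate can be sketched as follows. This is a minimal illustration, not the article's actual model: it assumes a one-parameter (Rasch) IRT model with known item difficulties and finds the maximum-likelihood ability estimate by Newton's method. Unlike percentage correct, the resulting estimate accounts for which items a student answered, not just how many.

```python
import math

def rasch_ability(responses, difficulties, iters=25):
    """MLE of ability theta under the Rasch (1PL) model,
    where P(correct | theta, b) = 1 / (1 + exp(-(theta - b))).

    responses    -- list of 0/1 item scores
    difficulties -- list of item difficulties b (same length)

    Note: the MLE is infinite for all-correct or all-incorrect
    response patterns, so this sketch assumes a mixed pattern.
    """
    theta = 0.0
    for _ in range(iters):
        p = [1.0 / (1.0 + math.exp(-(theta - b))) for b in difficulties]
        grad = sum(r - pi for r, pi in zip(responses, p))  # score function
        info = sum(pi * (1.0 - pi) for pi in p)            # Fisher information
        theta += grad / info                               # Newton step
    return theta

# Two hypothetical students with the same percentage correct (2 of 4)
# but different items answered: one succeeds on hard items, one on easy.
difficulties = [-1.5, -0.5, 0.5, 1.5]
easy_student = rasch_ability([1, 1, 0, 0], difficulties)
hard_student = rasch_ability([0, 0, 1, 1], difficulties)
```

Percentage correct treats both students identically, while the IRT estimate can separate them when item difficulties differ, which is one reason it can yield better-fitting prediction models.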
Subject
Applied Mathematics, Applied Psychology, Developmental and Educational Psychology, Education
Cited by
10 articles.