Abstract
Problem-solving and higher-order learning are goals of higher education. It has been repeatedly suggested that multiple-choice questions (MCQs) can be used to test higher-order learning, although objective empirical evidence is lacking and MCQs are often criticised for assessing only lower-order, factual, or ‘rote’ learning. These challenges are compounded by a lack of agreement on what constitutes higher-order learning: it is normally defined subjectively using heavily criticised frameworks such as Bloom’s taxonomy. There is also a lack of agreement on how to write MCQs that assess higher-order learning. Here we tested guidance for the creation of MCQs to assess higher-order learning by evaluating the performance of students who were subject-matter novices versus experts. We found that questions written using the guidance were much harder to answer when students had no prior subject knowledge, whereas lower-order questions could be answered by simply searching online. These findings suggest that questions written using the guidance do indeed test higher-order learning, and such MCQs may be a valid alternative to other written assessment formats designed to test higher-order learning, such as essays, where reliability and cheating are major concerns.
Publisher
Springer Science and Business Media LLC
Cited by
1 article.