Abstract
Open-book examinations (OBEs) will likely become increasingly important assessment tools. We investigated how access to open-book resources affected questions testing factual recall, which might be easy to look up, versus questions testing higher-order cognitive domains. Few studies have investigated OBEs using modern Internet resources or as summative assessments. We compared performance on an examination conducted as a traditional closed-book exam (CBE) in 2019 (N = 320) and, due to COVID-19, as a remote OBE with free access to Internet resources in 2020 (N = 337). This summative, end-of-year assessment focused on basic science for second-year medical students. We categorized questions by Bloom's taxonomy ('Remember' versus 'Understand/Apply'). We predicted higher performance on the OBE, driven by higher performance on 'Remember' questions. We used an item-centric analysis, taking performance per item across all examinees as the outcome variable in logistic regression, with the terms 'Open-Book', 'Bloom Category', and their interaction. Performance was higher on OBE questions than CBE questions (OR 2.2, 95% CI: 2.14–2.39), and higher on 'Remember' than 'Understand/Apply' questions (OR 1.13, 95% CI: 1.09–1.19). The difference in performance between 'Remember' and 'Understand/Apply' questions was greater in the OBE than in the CBE ('Open-Book' * 'Bloom Category' interaction: OR 1.2, 95% CI: 1.19–1.37). Access to open-book resources had a greater effect on performance on factual recall questions than on higher-order questions, though performance was higher in the OBE overall. OBE design must consider how searching for information affects performance, particularly on questions measuring different domains of knowledge.
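As a rough illustration of the analysis described above (a minimal sketch, not the authors' code), the item-centric model can be expressed as a binomial GLM in which each question contributes its count of correct and incorrect responses, with main effects for exam format and Bloom category plus their interaction. The column names and the toy counts below are hypothetical.

```python
# Sketch of an item-centric binomial logistic regression, assuming a
# per-item data frame with hypothetical columns: n_correct, n_total,
# open_book (0 = CBE, 1 = OBE), and bloom (question's Bloom category).
import numpy as np
import pandas as pd
import patsy
import statsmodels.api as sm

items = pd.DataFrame({
    "n_correct": [290, 260, 310, 255],   # examinees answering the item correctly
    "n_total":   [320, 320, 337, 337],   # examinees attempting the item
    "open_book": [0, 0, 1, 1],           # 0 = 2019 CBE, 1 = 2020 OBE
    "bloom":     ["Remember", "Understand/Apply",
                  "Remember", "Understand/Apply"],
})

# Two-column endog (successes, failures) per item for a Binomial GLM,
# with 'Understand/Apply' as the reference Bloom category.
endog = np.column_stack([items["n_correct"],
                         items["n_total"] - items["n_correct"]])
exog = patsy.dmatrix("open_book * C(bloom, Treatment('Understand/Apply'))",
                     items, return_type="dataframe")
fit = sm.GLM(endog, exog, family=sm.families.Binomial()).fit()

print(np.exp(fit.params))      # odds ratios for main effects and interaction
print(np.exp(fit.conf_int()))  # 95% CIs on the odds-ratio scale
```

Exponentiating the coefficients yields odds ratios comparable in form to those reported in the abstract; the interaction term captures whether the 'Remember' advantage differs between open- and closed-book formats.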
Publisher
Springer Science and Business Media LLC
Subject
Education, General Medicine
References (53 articles)
1. Agarwal, P. K., & Roediger, H. L. (2011). Expectancy of an open-book test decreases performance on a delayed closed-book test. Memory, 19(8), 836–852. https://doi.org/10.1080/09658211.2011.613840
2. Anderson, L. W., Krathwohl, D. R., & Bloom, B. S. (2001). A taxonomy for learning, teaching, and assessing: A revision of Bloom's taxonomy of educational objectives. http://books.google.com/books?id=JPkXAQAAMAAJ&pgis=1. Accessed 4 January 2021.
3. Baillie, C., & Toohey, S. (1997). The “power test”: Its impact on student learning in a materials science course for engineering students. Assessment and Evaluation in Higher Education, 22(1), 33–48. https://doi.org/10.1080/0260293970220103
4. Barsky, E., & Bar-Ilan, J. (2012). The impact of task phrasing on the choice of search keywords and on the search process and success. Journal of the American Society for Information Science and Technology, 63(10), 1987–2005. https://doi.org/10.1002/asi.22654
5. Bell, D. J., & Ruthven, L. (2004). Searcher's assessments of task complexity for web searching. Lecture Notes in Computer Science, Vol. 2997. https://doi.org/10.1007/978-3-540-24752-4_5
Cited by
7 articles.