Abstract
Systematic reviews and meta-analyses are crucial for advancing research, yet they are time-consuming and resource-demanding. Although machine learning and natural language processing algorithms may reduce the time and resources required, their performance has not been tested in education and educational psychology, and there is little clear guidance on when researchers should stop the screening process. In this study, we conducted a retrospective screening simulation using 27 systematic reviews in education and educational psychology. We evaluated the sensitivity, specificity, and estimated time savings of several learning algorithms and heuristic stopping criteria. The results showed, on average, a 58% (SD = 19%) reduction in the screening workload of irrelevant records when using learning algorithms for abstract screening, and an estimated time savings of 1.66 days (SD = 1.80). Random forests with sentence bidirectional encoder representations from transformers (sentence-BERT) outperformed the other algorithms, underscoring the importance of incorporating semantic and contextual information during feature extraction and modeling in the screening process. Furthermore, we found that 95% of all relevant abstracts within a given dataset can be retrieved using heuristic stopping rules. Specifically, a rule that stops screening after 20% of records have been classified and 5% of records have consecutively been classified as irrelevant yielded the largest gains in specificity (M = 42%, SD = 28%). However, the performance of the heuristic stopping criteria depended on the learning algorithm used, the size of the abstract collection, and its proportion of relevant papers. Our study provides empirical evidence on the performance of machine learning algorithms for abstract screening in systematic reviews in education and educational psychology.
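To make the best-performing stopping rule concrete, the sketch below implements the "20% screened, then 5% consecutive irrelevant" heuristic described above. It is a minimal illustration, assuming a human-in-the-loop loop over records ranked by the learning algorithm; the function and parameter names (should_stop, min_screened_frac, consecutive_irrelevant_frac) are hypothetical and not taken from the paper's implementation.

def should_stop(n_screened: int, n_total: int, consecutive_irrelevant: int,
                min_screened_frac: float = 0.20,
                consecutive_irrelevant_frac: float = 0.05) -> bool:
    """Return True once at least min_screened_frac of all records have been
    screened AND the last consecutive_irrelevant_frac * n_total screened
    records were all classified as irrelevant (hypothetical names)."""
    if n_screened < min_screened_frac * n_total:
        return False
    return consecutive_irrelevant >= consecutive_irrelevant_frac * n_total

def screen_until_stop(ranked_labels):
    """Screen records in the model-ranked order, tracking the run of
    consecutive irrelevant classifications; return how many were screened."""
    n_total = len(ranked_labels)
    consecutive_irrelevant = 0
    for i, label in enumerate(ranked_labels, start=1):
        consecutive_irrelevant = consecutive_irrelevant + 1 if label == "irrelevant" else 0
        if should_stop(i, n_total, consecutive_irrelevant):
            return i
    return n_total

# Toy data (hypothetical): 100 records with the relevant ones ranked first.
labels = ["relevant"] * 10 + ["irrelevant"] * 90
print(screen_until_stop(labels))  # stops at record 20: 20% screened, >=5 consecutive irrelevant

Under this sketch, only 20 of 100 records would be screened manually, which is the kind of workload reduction the simulation quantifies.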
Publisher
Springer Science and Business Media LLC
Cited by
4 articles.