The Impact of Systematic Review Automation Tools on Methodological Quality and Time Taken to Complete Systematic Review Tasks: Case Study

Authors:

Justin Clark, Catherine McFarlane, Gina Cleo, Christiane Ishikawa Ramos, Skye Marshall

Abstract

Background: Systematic reviews (SRs) are considered the highest level of evidence for answering research questions; however, they are time and resource intensive.

Objective: When SR tasks done manually, using standard methods, are compared with the same tasks done using automated tools, (1) what is the difference in the time to complete the SR task and (2) what is the impact on the error rate of the SR task?

Methods: A case study compared specific tasks performed during the conduct of an SR on prebiotic, probiotic, and synbiotic supplementation in chronic kidney disease. Two participants (the manual team) conducted the SR using current standard methods, comprising a total of 16 tasks. Another two participants (the automation team) conducted the tasks for which a systematic review automation (SRA) tool was available, comprising a total of six tasks. The time taken and the error rate for the six tasks completed by both teams were compared.

Results: The manual team took approximately 126 hours to produce a draft of the background, methods, and results sections of the SR. For the six tasks in which times were compared, the manual team spent 2493 minutes (approximately 42 hours), compared with 708 minutes (approximately 12 hours) for the automation team. The manual team had a higher error rate in two of the six tasks: in Task 5 (run the systematic search), the manual team made eight errors versus three by the automation team, and in Task 12 (assess the risk of bias), 25 of the manual team's assessments differed from a reference standard, compared with 20 for the automation team. The manual team had a lower error rate in one of the six tasks: in Task 6 (deduplicate search results), the manual team removed one unique study and missed no duplicates, whereas the automation team removed two unique studies and missed seven duplicates. Error rates were similar for the two remaining compared tasks: in Task 7 (screen the titles and abstracts) and Task 9 (screen the full text), neither team excluded any relevant studies. One task, Task 8 (find the full text), could not be compared between the teams.

Conclusions: For the majority of SR tasks where an SRA tool was used, the time required to complete the task was reduced for novice researchers while methodological quality was maintained.
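The Task 6 deduplication result is easier to interpret with a concrete sense of how record matching can fail in both directions. The paper does not describe the algorithm used by the SRA tool; what follows is a minimal, hypothetical Python sketch of deduplication keyed on a normalized title, with invented record data (normalize_title, deduplicate, and all titles are illustrative assumptions, not the study's method).

import re

def normalize_title(title: str) -> str:
    # Lowercase, replace punctuation with spaces, and collapse whitespace.
    title = re.sub(r"[^a-z0-9 ]", " ", title.lower())
    return re.sub(r"\s+", " ", title).strip()

def deduplicate(records: list[dict]) -> list[dict]:
    # Keep the first record seen for each normalized title; later records
    # with the same key are treated as duplicates and dropped.
    seen: set[str] = set()
    unique: list[dict] = []
    for record in records:
        key = normalize_title(record["title"])
        if key not in seen:
            seen.add(key)
            unique.append(record)
    return unique

# Hypothetical search results: the second record is a true duplicate of the
# first and is caught; the third is the same study retitled by another
# database and is missed, because its normalized title differs.
records = [
    {"title": "Synbiotic supplementation in CKD: a randomized trial"},
    {"title": "Synbiotic Supplementation in CKD: A Randomized Trial."},
    {"title": "A randomized trial of synbiotic supplementation in chronic kidney disease"},
]
print(len(deduplicate(records)))  # prints 2; one duplicate is missed

The same key-based matching can also merge two genuinely distinct studies whose titles normalize identically, removing a unique record. Real SRA tools use richer keys (authors, year, DOI, fuzzy similarity), but the trade-off between missed duplicates and wrongly removed unique records, visible in both teams' Task 6 results, is the same.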

Publisher

JMIR Publications Inc.

Subject

Computer Science Applications, Education

