A Novel Question-Answering Framework for Automated Citation Screening Using Large Language Models

Authors:

Akinseloyin Opeoluwa, Jiang Xiaorui, Palade Vasile

Abstract

Objective: This paper aims to address the challenges of citation screening (also known as abstract screening) in systematic reviews (SRs) by leveraging the zero-shot capabilities of large language models, particularly ChatGPT.

Methods: We employ ChatGPT as a zero-shot ranker to prioritize candidate studies by aligning abstracts with the selection criteria outlined in an SR protocol. Citation screening was transformed into a novel question-answering (QA) framework, treating each selection criterion as a question to be addressed by ChatGPT. The framework involves breaking down the selection criteria into multiple questions, properly prompting ChatGPT to answer each question, scoring and re-ranking each answer, and combining the responses to make nuanced inclusion or exclusion decisions.

Results: Large-scale validation was performed on the benchmark of CLEF eHealth 2019 Task 2: Technology Assisted Reviews in Empirical Medicine. Across 31 datasets spanning four categories of SRs, the proposed QA framework consistently outperformed other zero-shot ranking models. Compared with complex ranking approaches that use iterative relevance feedback, and with fine-tuned deep-learning-based ranking models, our ChatGPT-based zero-shot citation screening approaches still demonstrated competitive, and sometimes better, results, underscoring their high potential for facilitating automated systematic reviews.

Conclusion: The investigation confirmed the value of leveraging selection criteria to improve automated citation screening. ChatGPT proved proficient at prioritizing candidate studies using the proposed QA framework, and significant performance improvements were obtained by re-ranking answers according to the semantic alignment between abstracts and selection criteria, further highlighting the pertinence of selection criteria for enhancing citation screening.
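Because the abstract only summarises the framework, the following is a minimal, hypothetical Python sketch of the core idea (one zero-shot QA pass per selection criterion, followed by a simple ranking), assuming the official OpenAI chat-completions client. The prompt wording, the 0-10 answer scale, the helper names `ask_criterion` and `screen`, and the score-averaging rule are all illustrative assumptions; they do not reproduce the paper's actual prompting, answer re-ranking by semantic alignment, or decision-combination steps.

```python
# Illustrative sketch only: not the authors' implementation.
from openai import OpenAI  # assumes the openai>=1.0 Python client

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_criterion(abstract: str, criterion: str, model: str = "gpt-3.5-turbo") -> float:
    """Ask the LLM whether an abstract satisfies one selection criterion.

    Returns a score in [0, 1]; the 0-10 scale and prompt wording are
    hypothetical choices for this sketch.
    """
    prompt = (
        "You are screening studies for a systematic review.\n"
        f"Selection criterion: {criterion}\n"
        f"Abstract: {abstract}\n"
        "On a scale from 0 (clearly fails the criterion) to 10 (clearly "
        "satisfies it), how well does this abstract meet the criterion? "
        "Reply with a single integer."
    )
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic zero-shot scoring
    )
    try:
        return int(response.choices[0].message.content.strip()) / 10.0
    except ValueError:
        return 0.0  # unparsable answer -> treat as no evidence


def screen(abstracts: dict[str, str], criteria: list[str]) -> list[tuple[str, float]]:
    """Score each candidate abstract against every criterion and rank studies.

    Averaging the per-criterion scores is one simple aggregation rule; the
    paper combines and re-ranks answers in a more nuanced way.
    """
    ranked = []
    for study_id, abstract in abstracts.items():
        scores = [ask_criterion(abstract, c) for c in criteria]
        ranked.append((study_id, sum(scores) / len(scores)))
    return sorted(ranked, key=lambda pair: pair[1], reverse=True)
```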

Publisher

Cold Spring Harbor Laboratory


Cited by 2 articles.