Comparative study of ChatGPT and human evaluators on the assessment of medical literature according to recognised reporting standards

Authors:

Roberts Richard HR, Ali Stephen R, Hutchings Hayley A, Dobbs Thomas D, Whitaker Iain S

Abstract

Introduction

Amid the challenges clinicians face in staying updated with medical research, artificial intelligence (AI) tools such as the large language model (LLM) ChatGPT could automate appraisal of research quality, saving time and reducing bias. This study compares the proficiency of ChatGPT-3 against human evaluation in scoring abstracts to determine its potential as a tool for evidence synthesis.

Methods

We compared ChatGPT's scoring of implant dentistry abstracts with human evaluators using the Consolidated Standards of Reporting Trials for Abstracts reporting standards checklist, yielding an overall compliance score (OCS). Bland-Altman analysis assessed agreement between human and AI-generated OCS percentages. Additional error analysis included the mean difference of OCS subscores, Welch's t-test and Pearson's correlation coefficient.

Results

Bland-Altman analysis showed a mean difference of 4.92% (95% CI 0.62%, 0.37%) in OCS between human evaluation and ChatGPT. Error analysis displayed small mean differences in most domains, with the highest in 'conclusion' (0.764 (95% CI 0.186, 0.280)) and the lowest in 'blinding' (0.034 (95% CI 0.818, 0.895)). The strongest correlations were in 'harms' (r=0.32, p<0.001) and 'trial registration' (r=0.34, p=0.002), whereas the weakest were in 'intervention' (r=0.02, p<0.001) and 'objective' (r=0.06, p<0.001).

Conclusion

LLMs like ChatGPT can help automate the appraisal of medical literature, aiding in the identification of accurately reported research. Possible applications of ChatGPT include integration within medical databases for abstract evaluation. Current limitations include the token limit, which restricts its usage to abstracts. As AI technology advances, future versions such as GPT-4 could offer more reliable, comprehensive evaluations, enhancing the identification of high-quality research and potentially improving patient outcomes.
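The Bland-Altman analysis described in the Methods can be sketched as follows. This is an illustrative example only, not the authors' code: the OCS percentages below are made-up values, and the function simply computes the mean difference (bias) and the conventional 95% limits of agreement (bias ± 1.96 SD of the differences).

```python
import statistics

def bland_altman(scores_a, scores_b):
    """Return the bias (mean paired difference) and 95% limits of agreement."""
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)  # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical OCS percentages for five abstracts (human vs ChatGPT)
human   = [62.5, 75.0, 50.0, 87.5, 68.8]
chatgpt = [56.3, 75.0, 43.8, 81.3, 62.5]

bias, (lo, hi) = bland_altman(human, chatgpt)
```

A positive bias indicates that human scores were, on average, higher than ChatGPT's; limits of agreement that span zero (as in the reported result) indicate the two raters did not differ systematically beyond that interval.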

Funder

Scar Free Foundation

British Association of Plastic, Reconstructive and Aesthetic Surgeons

Welsh Clinical Academic Training Fellowship

Swansea University

Publisher

BMJ

Subject

Health Information Management, Health Informatics, Computer Science Applications

