Accuracy of Treatment Recommendations by Pragmatic Evidence Search and Artificial Intelligence: An Exploratory Study

Authors:

Baig Zunaira 1, Lawrence Daniel 2, Ganhewa Mahen 2, Cirillo Nicola 1,2,3 ORCID

Affiliations:

1. Melbourne Dental School, The University of Melbourne, 720 Swanston Street, Carlton, VIC 3053, Australia

2. CoTreat Pty Ltd., Melbourne, VIC 3000, Australia

3. School of Dentistry, University of Jordan, Amman 11733, Jordan

Abstract

An extensive literature is emerging in dentistry with the aim of optimizing clinical practice. Evidence-based guidelines (EBGs) are designed to collate diagnostic criteria and clinical treatments for a range of conditions based on high-quality evidence. Recently, advances in Artificial Intelligence (AI) have prompted further inquiry into its applicability to, and integration into, dentistry. Hence, the aim of this study was to develop a model for assessing the accuracy of treatment recommendations for dental conditions generated by individual clinicians and by AI tools. For this pilot study, a Delphi panel of six experts led by CoTreat AI provided the definitions and developed evidence-based recommendations for subgingival and supragingival calculus. For the rapid review (a pragmatic approach that assesses the evidence base quickly using a systematic methodology), the Ovid Medline database was searched for subgingival and supragingival calculus. Studies were selected and reported according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA), and the study complied with the minimum requirements for a restricted systematic review. Treatment recommendations for the same conditions were also sought from ChatGPT (versions 3.5 and 4) and Bard (now Gemini). Adherence to the recommendations of the standard was assessed using qualitative content analysis and agreement scores for interrater reliability. Treatment recommendations produced by the AI programs generally aligned with the current literature, with agreement of up to 75%, although none of these tools except Bard provided its data sources. The clinician's rapid review, like GPT-4, suggested several procedures that may increase the likelihood of overtreatment. In terms of overall accuracy, GPT-4 outperformed all other tools, including the rapid review (Cohen's kappa 0.42 vs. 0.28).
In summary, this study provides preliminary observations on the suitability of different evidence-generating methods to inform clinical dental practice.

Publisher

MDPI AG

Cited by 1 article.
