TrachGPT: Appraisal of tracheostomy care recommendations from an artificial intelligence chatbot

Authors:

Oluwatobiloba Ayo‐Ajibola1, Ryan J. Davis1, Matthew E. Lin2, Neelaysh Vukkadala2, Karla O'Dell3, Mark S. Swanson3, Michael M. Johns3, Elizabeth A. Shuman3

Affiliation:

1. Keck School of Medicine of the University of Southern California, Los Angeles, California, USA

2. Department of Head and Neck Surgery, University of California, Los Angeles, Los Angeles, California, USA

3. Caruso Department of Otolaryngology‐Head and Neck Surgery, University of Southern California, Los Angeles, California, USA

Abstract

Objective: Safe home tracheostomy care requires engagement and troubleshooting by patients, who may turn to online, AI‐generated information sources. This study assessed the quality of ChatGPT responses to such queries.

Methods: In this cross‐sectional study, ChatGPT was prompted with 10 hypothetical tracheostomy care questions in three domains (complication management, self‐care advice, and lifestyle adjustment). Responses were graded by four otolaryngologists for appropriateness, accuracy, and overall score. The readability of responses was evaluated using the Flesch Reading Ease (FRE) and Flesch–Kincaid Reading Grade Level (FKRGL). Descriptive statistics and ANOVA testing were performed with statistical significance set to p < .05.

Results: On a scale of 1–5, with 5 representing the greatest appropriateness or overall score, and a 4‐point scale with 4 representing the highest accuracy, the responses exhibited moderately high appropriateness (mean = 4.10, SD = 0.90), high accuracy (mean = 3.55, SD = 0.50), and moderately high overall scores (mean = 4.02, SD = 0.86). Scoring between response categories (self‐care recommendations, complication recommendations, lifestyle adjustments, and special device considerations) revealed no significant differences. Suboptimal responses lacked nuance and contained incorrect information and recommendations. Readability indicated college and advanced levels for FRE (mean = 39.5, SD = 7.17) and FKRGL (mean = 13.1, SD = 1.47), higher than the sixth‐grade level recommended for patient‐targeted resources by the NIH.

Conclusion: While ChatGPT‐generated tracheostomy care responses may exhibit acceptable appropriateness, incomplete or misleading information may have dire clinical consequences. Further, inappropriately high reading levels may limit patient comprehension and accessibility. At this point in its technological infancy, AI‐generated information should not be solely relied upon as a direct patient care resource.
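The FRE and FKRGL metrics used in the Methods are standard published formulas based on word, sentence, and syllable counts. A minimal sketch of both calculations follows; the function names are illustrative, and the counts would in practice come from a text-analysis tool rather than being supplied by hand:

```python
def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """Flesch Reading Ease: higher scores indicate easier text.
    Scores below ~50 correspond to college-level difficulty."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def flesch_kincaid_grade(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid Reading Grade Level: approximate U.S. school grade."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Example with hypothetical counts (not taken from the study's data):
# a passage of 100 words, 5 sentences, 150 syllables
fre = flesch_reading_ease(100, 5, 150)
fkrgl = flesch_kincaid_grade(100, 5, 150)
```

By these formulas, the study's mean FKRGL of 13.1 corresponds to roughly a college-freshman reading level, well above the NIH's sixth-grade recommendation cited in the Results.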

Publisher

Wiley
