Affiliation:
1. Morsani College of Medicine, University of South Florida, Tampa, Florida, USA
2. Department of Orthopaedic Surgery, University of South Florida, Tampa, Florida, USA
3. Orthopaedic Trauma Service, Florida Orthopedic Institute, Tampa, Florida, USA
Abstract
Aims
While internet search engines have been the primary information source for patients' questions, artificial intelligence large language models like ChatGPT are trending towards becoming the new primary source. The purpose of this study was to determine whether ChatGPT can answer patient questions about total hip arthroplasty (THA) and total knee arthroplasty (TKA) with consistent accuracy, comprehensiveness, and easy readability.

Methods
We posed the 20 most Google-searched questions about THA and TKA, plus ten additional postoperative questions, to ChatGPT. Each question was asked twice to evaluate consistency in quality. Following each response, we replied, "Please explain so it is easier to understand," to evaluate ChatGPT's ability to reduce the reading grade level of its responses, measured as Flesch-Kincaid Grade Level (FKGL). Five resident physicians rated the 120 responses on 1 to 5 accuracy and comprehensiveness scales, and additionally answered a "yes" or "no" question regarding acceptability. Mean scores were calculated for each question, and responses were deemed acceptable if ≥ four raters answered "yes."

Results
The mean accuracy and comprehensiveness scores were 4.26 (95% confidence interval (CI) 4.19 to 4.33) and 3.79 (95% CI 3.69 to 3.89), respectively. Overall, 59.2% of responses (71/120; 95% CI 50.0% to 67.7%) were acceptable. ChatGPT was consistent when asked the same question twice, with no significant difference in accuracy (t = 0.821; p = 0.415), comprehensiveness (t = 1.387; p = 0.171), acceptability (χ² = 1.832; p = 0.176), or FKGL (t = 0.264; p = 0.793). FKGL was significantly lower (t = 2.204; p = 0.029) for the simplified responses (11.14; 95% CI 10.57 to 11.71) than for the original responses (12.15; 95% CI 11.45 to 12.85).

Conclusion
ChatGPT answered THA and TKA patient questions with accuracy comparable to previous reports of websites and with adequate comprehensiveness, but with limited acceptability as the sole information source. ChatGPT has potential for answering patient questions about THA and TKA, but needs improvement.

Cite this article: Bone Jt Open 2024;5(2):139–146.
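The Flesch-Kincaid Grade Level used as the readability measure above is a standard published formula: FKGL = 0.39 × (words/sentences) + 11.8 × (syllables/words) − 15.59. As a minimal illustrative sketch (not the study's tooling), it could be computed with a naive vowel-group syllable heuristic:

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: one syllable per run of vowels (assumption,
    # not the syllable counter used in the study).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def fkgl(text: str) -> float:
    # Flesch-Kincaid Grade Level:
    # 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

A score of roughly 11 to 12, as reported for ChatGPT's responses, corresponds to an 11th- to 12th-grade reading level, well above the sixth- to eighth-grade level commonly recommended for patient materials.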
Publisher
British Editorial Society of Bone & Joint Surgery
Cited by 4 articles.