Affiliation:
1. Department of Orthopaedic Surgery, Emory University, Atlanta, GA, USA
2. Division of Hand Surgery, Department of Orthopaedic Surgery, Emory University, Atlanta, GA, USA
Abstract
Background: In recent years, ChatGPT has become a popular source of information online. Physicians need to be aware of the resources their patients are using to inform themselves about their conditions. This study evaluates the physician-graded accuracy and completeness of ChatGPT's responses to questions patients are likely to ask the artificial intelligence (AI) system about common upper limb orthopedic conditions.

Methods: ChatGPT 3.5 was interrogated concerning 5 common orthopedic hand conditions: carpal tunnel syndrome, Dupuytren contracture, De Quervain tenosynovitis, trigger finger, and carpometacarpal arthritis. Questions covered each condition's symptoms, pathology, management, surgical indications, recovery time, insurance coverage, and eligibility for workers' compensation. Each topic comprised 12 to 15 questions and was handled in its own ChatGPT conversation. All questions regarding the same diagnosis were presented to the AI, and its answers were recorded. Each answer was then graded for accuracy (on a 1-6 Likert scale) and completeness (on a 1-3 Likert scale) by 10 fellowship-trained hand surgeons. Descriptive statistics were performed.

Results: Overall, the mean accuracy score for ChatGPT's answers on common orthopedic hand diagnoses was 4.83 ± 0.95 out of 6. The mean completeness score was 2 ± 0.59 out of 3.

Conclusions: Easily accessible online AI such as ChatGPT is becoming more advanced and thus more reliable in answering common medical questions. Physicians can expect such online resources to be mostly accurate, though often incomplete. Patients should be wary of relying on such resources in isolation.