Author:
Delsoz Mohammad, Madadi Yeganeh, Munir Wuqaas M, Tamm Brendan, Mehravaran Shiva, Soleimani Mohammad, Djalilian Ali, Yousefi Siamak
Abstract
Introduction
To assess the capabilities of ChatGPT-4.0 and ChatGPT-3.5 in diagnosing corneal eye diseases from case reports and to compare them with human experts.

Methods
We randomly selected 20 cases of corneal diseases, including corneal infections, dystrophies, degenerations, and injuries, from a publicly accessible online database from the University of Iowa. We entered the text of each case description into ChatGPT-4.0 and ChatGPT-3.5 and asked each model for a provisional diagnosis. We then scored the responses against the correct diagnoses, compared them with the diagnoses of three cornea specialists (human experts), and evaluated interobserver agreement.

Results
The provisional diagnosis accuracy of ChatGPT-4.0 was 85% (17 of 20 cases correct), while that of ChatGPT-3.5 was 60% (12 of 20). The accuracies of the three cornea specialists were 100% (20 cases), 90% (18 cases), and 90% (18 cases), respectively. Interobserver agreement between ChatGPT-4.0 and ChatGPT-3.5 was 65% (13 cases), while agreement between ChatGPT-4.0 and the three cornea specialists was 85% (17 cases), 80% (16 cases), and 75% (15 cases), respectively. Agreement between ChatGPT-3.5 and each of the three cornea specialists was 60% (12 cases).

Conclusions
ChatGPT-4.0 was markedly more accurate than ChatGPT-3.5 in diagnosing patients with various corneal conditions, and its performance is promising for potential clinical integration.

Key summary points
- This work evaluated ChatGPT-4.0 and ChatGPT-3.5 on providing provisional diagnoses of different corneal eye diseases from case descriptions and compared their performance with that of three cornea specialists.
- ChatGPT-4.0 diagnosed patients with various corneal conditions significantly more accurately than ChatGPT-3.5 on these specific cases.
- Interobserver agreement between ChatGPT-4.0 and ChatGPT-3.5 was 65%, while agreement between ChatGPT-4.0 and the three cornea specialists was 85%, 80%, and 75%, respectively.
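To make the evaluation protocol concrete, the following is a minimal Python sketch of one way to reproduce it: query a model for a provisional diagnosis per case, then compute accuracy and pairwise percent agreement. It assumes the OpenAI Python client; the model identifier, prompt wording, and exact-match scoring are hypothetical stand-ins, since the paper does not publish its prompts or scoring script.

from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def provisional_diagnosis(case_text: str, model: str = "gpt-4") -> str:
    """Ask the model for a single provisional diagnosis for one case.

    The prompt wording is an illustrative assumption, not the study's.
    """
    response = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": f"Case description:\n{case_text}\n\n"
                       "What is the most likely provisional diagnosis?",
        }],
    )
    return response.choices[0].message.content.strip()

def accuracy(predicted: list[str], truth: list[str]) -> float:
    """Fraction of cases where the prediction matches the reference."""
    return sum(p == t for p, t in zip(predicted, truth)) / len(truth)

def percent_agreement(rater_a: list[str], rater_b: list[str]) -> float:
    """Interobserver agreement: share of cases given the same label."""
    return sum(a == b for a, b in zip(rater_a, rater_b)) / len(rater_a)

Note that in the actual study the models returned free-text diagnoses whose correctness and agreement were adjudicated by humans; the exact string-equality checks above stand in for that judgment and would undercount matches on real responses.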
Publisher
Cold Spring Harbor Laboratory
Cited by
3 articles.