Affiliations:
1. Division of Urology, University of Sao Paulo School of Medicine, Sao Paulo, Brazil
2. Division of Urology, ABC Medical School, Sao Paulo, Brazil
3. Department of Urology, Albert Einstein Jewish Hospital, Sao Paulo, Brazil
4. Department of Urologic Oncology, BP—a Beneficência Portuguesa de São Paulo, Sao Paulo, Brazil
5. Department of Surgery, State University of Feira de Santana, Bahia, Brazil
6. Innovation and Information Technology Sector, AC Camargo Cancer Hospital, Sao Paulo, Brazil
7. Department of Surgery/Urology, Memorial Sloan Kettering Cancer Center, New York, New York, USA
Abstract
Introduction: Artificial intelligence (AI) shows immense potential in medicine, and Chat Generative Pre-trained Transformer (ChatGPT) has been used for various purposes in the field. However, it may not match the complexity and nuance of certain medical scenarios. This study evaluates the accuracy of ChatGPT 3.5 and ChatGPT 4 in providing recommendations for the management of postprostatectomy urinary incontinence (PPUI), using the Incontinence After Prostate Treatment: AUA/SUFU Guideline as the best-practice benchmark.

Materials and Methods: A set of questions based on the AUA/SUFU Guideline was prepared, comprising 10 conceptual questions and 10 case-based questions. All questions were open-ended and entered into ChatGPT with an instruction to limit each answer to 200 words, for greater objectivity. Responses were graded as correct (1 point), partially correct (0.5 point), or incorrect (0 point). The performance of ChatGPT versions 3.5 and 4 was analyzed overall and separately for the conceptual and case-based questions.

Results: ChatGPT 3.5 scored 11.5 of 20 points (57.5% accuracy), while ChatGPT 4 scored 18 (90.0%; p = 0.031). On the conceptual questions, ChatGPT 3.5 gave accurate answers to six questions, one partially correct response, and three incorrect answers, for a score of 6.5. In contrast, ChatGPT 4 answered eight questions correctly and two partially correctly, scoring 9.0. On the case-based questions, ChatGPT 3.5 scored 5.0, while ChatGPT 4 scored 9.0. The domains in which ChatGPT performed worst were evaluation, treatment options, surgical complications, and special situations.

Conclusion: ChatGPT 4 demonstrated superior performance compared with ChatGPT 3.5 in providing recommendations for the management of PPUI, using the AUA/SUFU Guideline as a benchmark. Continuous monitoring is essential to evaluate the development and precision of AI-generated medical information.
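To make the grading arithmetic concrete, the following minimal Python sketch (not the authors' code) tallies the reported scores under the stated scheme of 1, 0.5, and 0 points per question. The per-question grade arrangements for the case-based blocks are hypothetical placeholders, since the abstract reports only block totals.

```python
# Illustrative sketch of the abstract's scoring scheme:
# 1.0 = correct, 0.5 = partially correct, 0.0 = incorrect.

def accuracy(grades, max_points):
    """Return (total score, percent accuracy) for a list of question grades."""
    total = sum(grades)
    return total, 100.0 * total / max_points

# ChatGPT 3.5: conceptual block 6.5/10 (six correct, one partial, three
# incorrect, as reported); case-based block 5.0/10 (hypothetical arrangement).
gpt35_conceptual = [1.0] * 6 + [0.5] + [0.0] * 3
gpt35_cases = [1.0] * 5 + [0.0] * 5

# ChatGPT 4: conceptual block 9.0/10 (eight correct, two partial, as
# reported); case-based block 9.0/10 (hypothetical arrangement).
gpt4_conceptual = [1.0] * 8 + [0.5] * 2
gpt4_cases = [1.0] * 9 + [0.0]

for name, grades in [("ChatGPT 3.5", gpt35_conceptual + gpt35_cases),
                     ("ChatGPT 4", gpt4_conceptual + gpt4_cases)]:
    total, pct = accuracy(grades, max_points=20)
    print(f"{name}: {total} / 20 points ({pct:.1f}% accuracy)")
# Prints 11.5 / 20 (57.5%) for ChatGPT 3.5 and 18.0 / 20 (90.0%) for ChatGPT 4,
# matching the totals reported in the Results.
```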