Large Language Models for Intraoperative Decision Support in Plastic Surgery: A Comparison between ChatGPT-4 and Gemini
Authors:
Cesar A. Gomez-Cabello 1, Sahar Borna 1, Sophia M. Pressman 1, Syed Ali Haider 1, Antonio J. Forte 1,2
Affiliations:
1. Division of Plastic Surgery, Mayo Clinic, 4500 San Pablo Rd S, Jacksonville, FL 32224, USA; 2. Center for Digital Health, Mayo Clinic, 200 First St. SW, Rochester, MN 55905, USA
Abstract
Background and Objectives: Large language models (LLMs) are emerging as valuable tools in plastic surgery, potentially reducing surgeons' cognitive load and improving patient outcomes. This study aimed to assess and compare the current state of the two most common and readily available LLMs, OpenAI's ChatGPT-4 and Google's Gemini Pro (1.0 Pro), in providing intraoperative decision support in plastic and reconstructive surgery procedures. Materials and Methods: We presented each LLM with 32 independent intraoperative scenarios spanning 5 procedures. We rated medical accuracy on a 5-point Likert scale and relevance on a 3-point Likert scale. We determined the readability of the responses using the Flesch–Kincaid Grade Level (FKGL) and Flesch Reading Ease (FRE) score. Additionally, we measured each model's response time. We compared performance using the Mann–Whitney U test and Student's t-test. Results: ChatGPT-4 significantly outperformed Gemini in providing accurate (3.59 ± 0.84 vs. 3.13 ± 0.83, p = 0.022) and relevant (2.28 ± 0.77 vs. 1.88 ± 0.83, p = 0.032) responses. Conversely, Gemini provided more concise and readable responses, with an average FKGL (12.80 ± 1.56) significantly lower than ChatGPT-4's (15.00 ± 1.89) (p < 0.0001); however, there was no difference in FRE scores (p = 0.174). Moreover, Gemini's average response time was significantly faster (8.15 ± 1.42 s) than ChatGPT-4's (13.70 ± 2.87 s) (p < 0.0001). Conclusions: Although ChatGPT-4 provided more accurate and relevant responses, both models demonstrated potential as intraoperative tools. Nevertheless, their inconsistent performance across the different procedures underscores the need for further training and optimization to ensure their reliability as intraoperative decision-support tools.
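The readability and comparison metrics used in the study are standard formulas. The sketch below shows how the FKGL and FRE scores and a Mann–Whitney U statistic could be computed in plain Python; the regex-based syllable counter is a naive assumption for illustration, not the tooling the authors used.

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count groups of consecutive vowels (assumption,
    # not the syllable counter used in the study).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> tuple[float, float]:
    """Return (FKGL, FRE) using the standard Flesch formulas."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    n_sent, n_words = max(1, len(sentences)), max(1, len(words))
    n_syll = sum(count_syllables(w) for w in words)
    fkgl = 0.39 * (n_words / n_sent) + 11.8 * (n_syll / n_words) - 15.59
    fre = 206.835 - 1.015 * (n_words / n_sent) - 84.6 * (n_syll / n_words)
    return fkgl, fre

def mann_whitney_u(a: list[float], b: list[float]) -> float:
    # U statistic for sample a: count pairs where a beats b (ties = 0.5).
    return sum((x > y) + 0.5 * (x == y) for x in a for y in b)

# Example: a short, simple sentence scores as very easy to read.
fkgl, fre = readability("The cat sat on the mat.")
```

In practice one would use a validated readability library and `scipy.stats.mannwhitneyu` (which also supplies the p-value), but the arithmetic above matches the published formulas.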