Abstract
Background
Developed originally as a tool for resident self-evaluation, the Plastic Surgery Inservice Training Examination (PSITE) has become a standardized assessment adopted by Plastic Surgery residency programs. The introduction of large language models (LLMs), such as ChatGPT (OpenAI, San Francisco, CA), has demonstrated the potential to advance the field of Plastic Surgery.
Objectives
The authors of this study aimed to assess whether ChatGPT could be used as a tool in resident education by evaluating its accuracy on the PSITE.
Methods
Questions were obtained from the 2022 PSITE, which was available on the American Council of Academic Plastic Surgeons (ACAPS) website. Questions containing images or tables were carefully inspected and flagged before being entered into ChatGPT. All ChatGPT responses were evaluated using the properties of natural coherence. Incorrect responses were categorized as containing a logical, informational, or explicit fallacy.
Results
ChatGPT answered a total of 242 questions with an accuracy of 54.96%. The software incorporated logical reasoning in 88.8% of questions, internal information in 95.5% of questions, and external information in 92.1% of questions. When responses were stratified by correctness, ChatGPT's use of external information differed significantly between correct and incorrect answers (P < .05).
Conclusions
ChatGPT is a versatile tool that has the potential to impact resident education by providing general knowledge, clarifying information, providing case-based learning, and promoting evidence-based medicine. With advancements in LLMs and artificial intelligence (AI), ChatGPT may become an impactful tool for resident education within Plastic Surgery.
Publisher
Oxford University Press (OUP)
Cited by
50 articles.