Affiliation:
1. Drexel University College of Medicine
Abstract
This study evaluates the proficiency of ChatGPT-4 across medical specialties and assesses its potential as a study tool for medical students preparing for the United States Medical Licensing Examination (USMLE) Step 2 and related clinical subject exams. ChatGPT-4 answered board-level questions with 89% accuracy overall but showed marked variation in performance across specialties. It excelled in psychiatry, neurology, and obstetrics & gynecology, yet underperformed in pediatrics, emergency medicine, and family medicine. These variations may be attributable to the depth and recency of the training data as well as the scope of each specialty; specialties with substantial interdisciplinary overlap performed worse, suggesting that complex clinical scenarios pose a challenge to the AI. The overall efficacy of ChatGPT-4 indicates a promising supplemental role in medical education, but the performance inconsistencies across specialties in the current version lead us to recommend that medical students use AI with caution.
Publisher
Research Square Platform LLC