ChatGPT 4 Versus ChatGPT 3.5 on The Final FRCR Part A Sample Questions: Assessing Performance and Accuracy of Explanations

Authors:

Youssef Ghosn, Omar El Sardouk, Yara Jabbour, Manal Jrad, Mohammed Hussein Kamareddine, Nada Abbas, Charbel Saade, Alain Abi Ghanem

Abstract

Objective: To evaluate the performance of two versions of ChatGPT, GPT4 and GPT3.5, on the Final FRCR (Part A), also referred to as the FRCR Part 2A radiology exam. The primary objective is to assess whether these large language models (LLMs) can answer radiology test questions correctly while providing accurate explanations for their answers.

Methods: The evaluation involved a total of 281 multiple-choice questions, combining the 41 FRCR sample questions found on The Royal College of Radiologists website with 240 questions from a supplementary test bank. Both GPT4 and GPT3.5 were given the 281 questions with the answer choices, and their responses were assessed for correctness and for the accuracy of the explanations provided. The 41 FRCR sample questions were ranked by difficulty into "low order" and "high order" questions. A significance level of p<0.05 was used.

Results: GPT4 demonstrated significant improvement over GPT3.5 in answering the 281 questions, achieving 76.5% correct answers compared with 52.7% (p<0.001). GPT4 also demonstrated significant improvement over GPT3.5 in providing accurate explanations for the 41 FRCR sample questions, with an accuracy of 65.9% versus 31.7% (p=0.002). Question difficulty did not significantly affect either model's performance.

Conclusion: The findings of this study demonstrate a significant improvement in the performance of GPT4 compared with GPT3.5 on an FRCR-style examination. However, the accuracy of the provided explanations may limit the models' reliability as learning tools.

Advances in Knowledge: The study indirectly explores the potential of LLMs to contribute to the diagnostic accuracy and efficiency of medical imaging, while raising questions about current LLMs' limitations in providing reliable explanations for radiology-related questions, which hinders their use for learning and in clinical practice.

Highlights:
ChatGPT4 passed an FRCR Part 2A style exam while ChatGPT3.5 did not.
ChatGPT4 showed significantly higher correctness of answers and accuracy of explanations.
No significant difference in performance was observed between "high order" and "low order" questions.
Explanation accuracy was lower than the correct-answer rate, limiting the models' reliability as learning tools.
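The abstract does not state which statistical test was used to compare the two models. As an illustrative sketch only, a two-proportion z-test on counts back-computed from the reported percentages (215/281 vs. 148/281 correct answers; 27/41 vs. 13/41 accurate explanations) yields p-values consistent with those reported; both the counts and the choice of test are assumptions, not the authors' stated method.

```python
import math

def two_proportion_z_test(x1, x2, n):
    """Two-sided two-proportion z-test with a pooled standard error,
    for two groups of equal size n with x1 and x2 successes."""
    p1, p2 = x1 / n, x2 / n
    pooled = (x1 + x2) / (2 * n)                    # pooled proportion
    se = math.sqrt(pooled * (1 - pooled) * 2 / n)   # pooled standard error
    z = (p1 - p2) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))      # two-sided normal p-value
    return z, p_value

# Correct answers on all 281 questions: GPT4 76.5% (~215), GPT3.5 52.7% (~148)
z_all, p_all = two_proportion_z_test(215, 148, 281)
print(f"answers:      z = {z_all:.2f}, p = {p_all:.1e}")   # p well below 0.001

# Accurate explanations on the 41 sample questions: 65.9% (27) vs 31.7% (13)
z_exp, p_exp = two_proportion_z_test(27, 13, 41)
print(f"explanations: z = {z_exp:.2f}, p = {p_exp:.3f}")   # p rounds to 0.002
```

A chi-square or exact test on the same counts would give essentially the same conclusion at these sample sizes.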

Publisher

Cold Spring Harbor Laboratory

