Authors:
Rosoł Maciej, Gąsior Jakub S., Łaba Jonasz, Korzeniewski Kacper, Młyńczak Marcel
Abstract
The study aimed to evaluate the performance of two Large Language Models (LLMs), ChatGPT (based on GPT-3.5) and GPT-4, each run with two values of the temperature parameter, on the Polish Medical Final Examination (MFE). The models were tested on three editions of the MFE (Spring 2022, Autumn 2022, and Spring 2023) in two language versions, English and Polish. The accuracies of the two models were compared, and the relationship between the correctness of answers and the answers' metrics was investigated. GPT-4 outperformed GPT-3.5 on all three examinations regardless of the language used. GPT-4 achieved a mean accuracy of 79.7% for both the Polish and English versions, passing every MFE edition. GPT-3.5 had mean accuracies of 54.8% for Polish and 60.3% for English; it passed none of the three Polish versions at temperature 0 and two of three at temperature 1, while passing all English versions regardless of the temperature value. The GPT-4 score was nevertheless mostly lower than the average score of a medical student. There was a statistically significant correlation between the correctness of the answers and the index of difficulty for both models. The overall accuracy of both models remained suboptimal and below the medical-student average, which emphasizes the need for further improvements in LLMs before they can be reliably deployed in medical settings. Nevertheless, these findings suggest a growing potential for the use of LLMs in medical education.
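The abstract reports a statistically significant relationship between the binary correctness of a model's answers and each question's index of difficulty. A standard statistic for correlating a binary variable with a continuous one is the point-biserial correlation coefficient; the paper does not state its exact method in the abstract, so the sketch below is illustrative, not the authors' code, and the function name and sample data are hypothetical:

```python
import math

def point_biserial(correct, difficulty):
    """Point-biserial correlation between a binary outcome
    (1 = answered correctly, 0 = answered incorrectly) and a
    continuous per-question metric such as the index of difficulty.
    Equivalent to the Pearson correlation with the binary variable
    coded as 0/1."""
    n = len(correct)
    hit = [d for c, d in zip(correct, difficulty) if c == 1]
    miss = [d for c, d in zip(correct, difficulty) if c == 0]
    mean_hit = sum(hit) / len(hit)
    mean_miss = sum(miss) / len(miss)
    mean_all = sum(difficulty) / n
    # Population standard deviation of the difficulty values.
    std = math.sqrt(sum((d - mean_all) ** 2 for d in difficulty) / n)
    p = len(hit) / n  # proportion of correct answers
    return (mean_hit - mean_miss) / std * math.sqrt(p * (1 - p))

# Toy data: questions answered correctly tend to have a higher
# index of difficulty (i.e. were easier), giving a strong positive r.
r = point_biserial([1, 1, 1, 0, 0, 0], [0.9, 0.8, 0.7, 0.3, 0.2, 0.1])
```

In practice one would also compute a p-value (e.g. via a t-test on `r`) to judge significance, as the study reports.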
Publisher
Springer Science and Business Media LLC
Cited by: 40 articles.