Author:
E Klang, S Portugez, R Gross, R Kassif Lerner, A Brenner, M Gilboa, T Ortal, S Ron, V Robinzon, H Meiri, G Segal
Abstract
Background
The task of writing multiple-choice question examinations for medical students is complex, time-consuming, and requires significant effort from clinical staff and faculty. Applying artificial intelligence algorithms to this area of medical education may therefore be advisable.
Methods
During March to April 2023, we used GPT-4, an OpenAI application, to write a 210-question multiple-choice question (MCQ) examination based on an existing exam template. The output was thoroughly reviewed by specialist physicians who were blinded to the source of the questions. Algorithm mistakes and inaccuracies identified by the specialists were classified as stemming from age, gender, or geographical insensitivities.
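For readers who wish to try a comparable workflow, the sketch below shows how such a prompt could be sent to GPT-4 through the OpenAI Python SDK. It is a minimal illustration only: the study's actual prompt, the batching into 210 questions, and the review procedure are not given in the abstract, so the prompt text and parameters shown are assumptions.

```python
# Illustrative sketch only: the authors' exact prompt and settings are not
# published in the abstract; the prompt text and temperature are hypothetical.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# Hypothetical instruction modelled on the study design: generate MCQs that
# follow an existing exam template and require clinical reasoning.
prompt = (
    "You are a medical educator. Using the exam template below, write 10 "
    "multiple-choice questions for final-year medical students. Each question "
    "must integrate knowledge with clinical reasoning, have one correct answer "
    "and four distractors, and avoid elimination-based formats.\n\n"
    "TEMPLATE:\n<existing exam template goes here>"
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
    temperature=0.7,  # hypothetical value; not reported in the abstract
)

print(response.choices[0].message.content)
```

In the study itself, the generated questions were then passed to blinded specialist physicians for review rather than being used directly.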
Results
After inputting a detailed prompt, GPT-4 produced the test rapidly and effectively. Only 1 question (0.5%) was judged to be false, and 15% of the questions necessitated revisions. Errors in the AI-generated questions included the use of outdated or inaccurate terminology, age-sensitive inaccuracies, gender-sensitive inaccuracies, and geographically sensitive inaccuracies. Questions disqualified because of a flawed methodological basis included elimination-based questions and questions that did not integrate knowledge with clinical reasoning.
Conclusion
GPT-4 can be used as an adjunctive tool for creating multiple-choice question medical examinations, yet rigorous inspection by specialist physicians remains pivotal.
Publisher
Springer Science and Business Media LLC
Subject
Education, General Medicine
Cited by
13 articles.