Author:
Liu Pin, Chen Shou-Yen, Chang Yu-Che, Ng Chip-Jin, Chaou Chung-Hsien
Abstract
Background: The in-training examination (ITE) has been widely adopted as an assessment tool to measure residents' competency. We incorporated different formats of assessment into the emergency medicine (EM) residency training program to form a multimodal, multistation ITE. This study was conducted to examine the cost and effectiveness of its different testing formats.

Methods: We conducted a longitudinal study in a tertiary teaching hospital in Taiwan. Nine EM residents were enrolled and followed for 4 years, and their biannual ITE scores were recorded and analyzed. Each ITE consisted of 8–10 stations categorized into four formats: multiple-choice question (MCQ), question and answer (QA), oral examination (OE), and high-fidelity simulation (HFS). Learner satisfaction, validity, reliability, and costs were analyzed.

Results: A total of 486 station scores were recorded over the 4 years. The numbers of MCQ, OE, QA, and HFS stations were 45 (9.26%), 90 (18.5%), 198 (40.7%), and 135 (27.8%), respectively. The overall Cronbach's alpha reached 0.968, indicating good overall internal consistency. The correlation with the EM board examination was highest for HFS (ρ = 0.657). The average costs of an MCQ, an OE, and an HFS station were approximately 3, 14, and 21 times that of a QA station, respectively.

Conclusions: Multidimensional assessment contributes to good reliability. HFS correlates best with the final training examination score but is also the most expensive ITE format. Increasing the testing domains with various formats improves the ITE's overall reliability. Program directors must understand each test format's strengths and limitations to arrive at the best combination of examinations for their local context.
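The internal-consistency figure reported above (Cronbach's alpha = 0.968) is computed from a matrix of examinee-by-station scores. The sketch below is a minimal illustration of the standard formula on synthetic data, not the authors' analysis code; the variable names and example values are assumptions.

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a 2D score matrix.

    scores: array of shape (n_examinees, n_items), where each column
    is one station/item and each row is one examinee.
    """
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]                      # number of items (stations)
    item_vars = scores.var(axis=0, ddof=1)   # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)

# Hypothetical example: 4 examinees scored on 3 stations.
scores = np.array([
    [7.0, 6.5, 7.2],
    [8.1, 8.0, 8.3],
    [5.5, 5.9, 5.4],
    [9.0, 8.7, 9.1],
])
print(round(cronbach_alpha(scores), 3))
```

Values near 1 indicate that the stations rank examinees consistently; conventionally, alpha above about 0.9 is taken as excellent internal consistency, which matches the 0.968 reported here.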
Cited by
1 article.