Multiple-Choice Questions in Small Animal Medicine: An Analysis of Cognitive Level and Structural Reliability, and the Impact of These Characteristics on Student Performance

Authors:

Cook Audrey K.1, Lidbury Jonathan A.1, Creevy Kate E.1, Heseltine Johanna C.1, Marsilio Sina2, Catchpole Brian3, Whittlestone Kim D.4

Affiliation:

1. Department of Small Animal Clinical Sciences, College of Veterinary Medicine and Biomedical Sciences

2. Department of Medicine and Epidemiology, School of Veterinary Medicine, University of California

3. Department of Pathobiology and Population Sciences, Royal Veterinary College

4. Royal Veterinary College

Abstract

Students entering the final year of the veterinary curriculum need to integrate information and solve problems. Assessments used to document competency prior to entry into the clinical environment should ideally provide a reliable measurement of these essential skills. In this study, five internal medicine specialists evaluated the cognitive grade (CG) and structural integrity of 100 multiple-choice questions (MCQs) used to assess learning by third-year students at a United States (US) veterinary school. Questions in CG 1 tested factual recall and simple understanding; those in CG 2 required interpretation and analysis; CG 3 MCQs tested problem solving. The majority (53%) of questions could be answered correctly using only recall or simple understanding (CG 1); 12% of MCQs required problem solving (CG 3). Fewer than half of the questions (43%) were structurally sound. Overall student performance differed significantly across the three CGs (92% for CG 1 vs. 84% for CG 3; p = .03). Structural integrity did not appear to affect overall performance, with a median pass rate of 90% for flawless questions versus 86% for those with poor structural integrity (p = .314). There was a moderate positive correlation between individual student outcomes on flawless CG 1 versus CG 3 questions (rs = 0.471; p < .001), although 13% of students failed to achieve an aggregate passing score (65%) on the CG 3 questions. These findings suggest that MCQ-based assessments may not adequately evaluate intended learning outcomes and that instructors may benefit from guidance and training to address this issue.
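For readers who want to run the same kind of analysis on their own assessment data, the sketch below shows how the reported comparisons could be computed. It uses Python with NumPy and SciPy; the per-student scores are randomly generated placeholders rather than the study's data, and only the 65% passing threshold and the Spearman rank correlation statistic (rs) are taken from the abstract.

    # Illustrative sketch only: hypothetical per-student scores, not the study's data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    # Placeholder percentage scores for 130 hypothetical students on flawless
    # CG 1 (recall) and CG 3 (problem-solving) questions.
    cg1_scores = rng.normal(92, 6, 130).clip(0, 100)
    cg3_scores = (0.6 * cg1_scores + rng.normal(30, 10, 130)).clip(0, 100)

    # Spearman rank correlation between individual outcomes on the two grades,
    # the statistic reported as rs in the abstract.
    rs, p_value = stats.spearmanr(cg1_scores, cg3_scores)
    print(f"Spearman rs = {rs:.3f}, p = {p_value:.3g}")

    # Proportion of students falling below the 65% aggregate passing threshold on CG 3.
    fail_rate = np.mean(cg3_scores < 65)
    print(f"Students failing the CG 3 aggregate (<65%): {fail_rate:.1%}")

A nonparametric comparison of pass rates between question groups (for example, flawless versus structurally flawed items) could follow the same pattern using a rank-based test from scipy.stats; the abstract does not state which test the authors used, so that choice is left to the reader.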

Publisher

University of Toronto Press Inc. (UTPress)

Subject

General Veterinary, Education, General Medicine

Cited by 2 articles.