A mixed methods evaluation of the effect of confidence-based versus conventional multiple-choice questions on student performance and the learning journey

Authors:

Luke X. Chong1, Nick Hockley1, Ryan J. Wood-Bradley1, James A. Armitage1

Affiliation:

1. Deakin University

Abstract

Background: Traditional single-best-answer multiple-choice questions (MCQs) are a proven and ubiquitous assessment tool. By their very nature, MCQs prompt students to guess when unsure of the answer, which may reduce their ability to reliably assay student knowledge. Moreover, the traditional Single Best Answer Test (SBAT) offers only binary feedback (correct or incorrect) and therefore does little to inform or enhance the student learning journey. Confidence-Based Answer Tests (CBATs) are designed to improve reliability because participants are not forced to guess when they cannot choose between two or more alternatives that they favour equally. CBATs enable students to reflect on their knowledge and better appreciate where their mastery of a particular subject may be weaker. Although CBATs can provide richer feedback to students and improve the learning journey, their adoption may be limited if they significantly alter student scores or grades, which could be viewed negatively. The aim of this study was to compare performance across these two test paradigms and to investigate whether any systematic biases are present.

Methods: Thirty-four first-year optometry students and 10 lecturers undertook a test comprising 40 questions. Each question was completed under both test paradigms: a confidence-based paradigm (CBAT), in which participants could weight their answers according to their confidence, and a single-best-answer paradigm (SBAT). Upon test completion, students undertook a survey comprising both Likert-scale and open-ended questions regarding their experience and perspectives on the CBAT and SBAT multiple-choice test paradigms. Open-ended responses were analysed thematically.

Results: There was no significant difference between paradigms, with a median difference of 1.25% (p = 0.313, Kruskal-Wallis) for students and 3.33% (p = 0.437, Kruskal-Wallis) for staff. The survey indicated that students had no strong preference for either method.

Conclusions: Since there was no significant difference between test paradigms, the confidence-based paradigm can be implemented as an equivalent and viable alternative to traditional MCQs, with the added potential benefit that, when coupled with reflective practice, it can provide students with a richer learning experience. Neither method exhibits an inherent bias over the other.
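The contrast between the two paradigms can be sketched in code. The study does not specify the exact CBAT scoring rule, so the weighting scheme below (confidence weights summing to 1, with credit equal to the weight placed on the correct option) is a hypothetical illustration, not the authors' method:

```python
def sbat_score(chosen: str, correct: str) -> float:
    """Single Best Answer Test: binary scoring, which forces a guess
    when the student is torn between options."""
    return 1.0 if chosen == correct else 0.0

def cbat_score(weights: dict[str, float], correct: str) -> float:
    """Confidence-Based Answer Test (hypothetical rule): the student
    distributes a total confidence of 1.0 across the options, and the
    score is the weight placed on the correct option."""
    if abs(sum(weights.values()) - 1.0) > 1e-9:
        raise ValueError("confidence weights must sum to 1")
    return weights.get(correct, 0.0)

# A student unsure between options B and C need not guess under CBAT:
split = cbat_score({"B": 0.5, "C": 0.5}, correct="B")   # 0.5
guess = sbat_score("C", correct="B")                    # 0.0
```

Under this sketch, partial confidence earns partial credit, whereas the SBAT awards nothing for a wrong guess; the distribution of weights also shows the student (and examiner) where mastery is weaker.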

Publisher

Research Square Platform LLC

