Abstract
Interpretation of images and spatial relationships is essential in medicine, but the evidence base on how to assess these skills is sparse. Thirty medical students were randomized into two groups (A and B) and invited to "think aloud" while completing 14 histology MCQs. All students answered six identical MCQs: three with text only and three requiring image interpretation. Students then answered eight "matched" questions, in which a text-only MCQ on version A was paired with an image-based MCQ on version B, or vice versa. Students' verbalizations were coded with a realist, inductive approach, and emerging codes were identified and integrated within overarching themes. High-performing students were more likely to self-generate an answer than middle- and lower-performing students, who verbalized more option elimination. Images had no consistent influence on item statistics, and students' self-identified visual-verbal preference ("learning style") had no consistent influence on their results for text-only or image-based questions. Students' verbalizations about images depended on whether interpreting the adjacent image was necessary to answer the question. Specific comments about the image were present in 95% of student-item verbalizations (142 of 150) when interpreting the image was essential to answering the question, whereas few students referred to images that were an unnecessary addition to the vignette. In conclusion, while assessing image interpretation is necessary for authenticity and constructive alignment, MCQs should be constructed to include only information and images relevant to answering the question, avoiding unnecessary additions that may increase extraneous cognitive load.
Funder
Irish Network of Healthcare Educators
Royal College of Surgeons in Ireland
Publisher
Springer Science and Business Media LLC