Affiliation:
1. Institute of Education, School of Inclusive and Special Education, Dublin City University, Dublin, Ireland
2. Institute of Education, School of Human Development, Dublin City University, Dublin, Ireland
3. Centre for Assessment Research, Policy and Practice in Education (CARPE), Dublin City University, Dublin, Ireland
Abstract
The use of animations and images in technology‐based assessments (TBAs) represents a significant change in assessment design. To ensure that appropriate inferences can be drawn from assessments that use multimedia stimuli, their impact on test‐taker performance and behaviour must be investigated. To achieve this, an experiment was conducted with 251 Irish post‐primary students using animated and text‐image versions of the same TBA of scientific literacy. Eye movement (n = 33) and interview data (n = 12) were also collected as measures of test‐taker attentional behaviour. Overall, there was no significant difference in test‐taker performance when identical items used animated or text‐image stimuli. However, items with dynamic stimuli often had higher discrimination indices, indicating that these items were better at distinguishing between test‐takers with differing levels of knowledge. Eye movement data also revealed that dynamic item stimuli encouraged longer average fixation durations on the response area of an item. These findings indicate that multimedia stimuli may affect how test‐takers interact with online assessments, which has implications for the claims that can be made about a learner's performance on an assessment. Recommendations for policy, practice and future research are considered.
Practitioner notes
What is already known about this topic:
The use of multimedia stimuli in the form of diagrams, high‐resolution images, animations and simulations is becoming more commonplace in technology‐based assessments (TBAs) for post‐primary aged learners.
It is unclear what impact the use of multimedia stimuli has on an individual's performance and behaviour in assessment and testing contexts.
Eye movement data can be used to support our understanding of test‐takers' interactions with TBAs.
What this paper adds:
By comparing the use of different types of multimedia stimuli (animations vs images), this study has responded to calls for a more in‐depth examination of test items involving multimedia stimuli for TBAs.
While there was no difference in test‐taker performance between those who saw test items with different forms of multimedia stimuli, key differences in attentional behaviour were noted: test‐takers interacted with the assessment differently depending on the multimedia stimuli used.
The current study showed that dynamic stimuli may be a way to improve item discrimination, a property that is generally desirable in assessments.
Implications for practice and/or policy:
Certain design features appeared to impose additional 'assessment load'. This information could be leveraged to improve test item design and test specifications. It may also encourage test developers to reconsider the claims they make about individuals who complete assessments with these features.
Eye‐tracking technology has considerable potential to support research in online assessment environments.