Using a novel multiple-source indicator to investigate the effect of scale format on careless and insufficient effort responding in a large-scale survey experiment
Published: 2024-06-10
Volume: 12
Issue: 1
ISSN: 2196-0739
Container-title: Large-scale Assessments in Education
Short-container-title: Large-scale Assess Educ
Language: en
Authors: Ulitzsch Esther, Buchholz Janine, Shin Hyo Jeong, Bertling Jonas, Lüdtke Oliver
Abstract
Common indicator-based approaches to identifying careless and insufficient effort responding (C/IER) in survey data scan response vectors or timing data for aberrances, such as patterns signaling straight lining, multivariate outliers, or signals that respondents rushed through the administered items. Each of these approaches is susceptible to unique types of misidentifications. We developed a C/IER indicator that requires agreement on C/IER identification from multiple behavioral sources, thereby alleviating the effect of each source’s standalone C/IER misidentifications and increasing the robustness of C/IER identification. To this end, we combined a response-pattern-based multiple-hurdle approach with a recently developed screen-time-based mixture decomposition approach. In an application of the proposed multiple-source indicator to PISA 2022 field trial data, we (a) showcase how the indicator hedges against (presumed) C/IER overidentification by its constituting components, (b) replicate associations with commonly reported external correlates of C/IER, namely agreement with self-reported effort and C/IER position effects, and (c) employ the indicator to study the effects of changes in scale characteristics on C/IER occurrence. To this end, we leverage a large-scale survey experiment implemented in the PISA 2022 field trial and investigate the effects of using frequency instead of agreement scales, as well as approximate instead of abstract frequency scale labels. We conclude that neither scale format manipulation has the potential to curb C/IER occurrence.
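The core idea of the multiple-source indicator, flagging a respondent as C/IER only when several behavioral sources agree, can be illustrated with a minimal sketch. The function name, the toy arrays, and the simple conjunction rule below are assumptions for illustration only; they do not reproduce the article's multiple-hurdle or mixture-decomposition machinery.

```python
import numpy as np

# Illustrative sketch only: variable names and flagging rules are hypothetical,
# not taken from the article. The point is that a respondent counts as C/IER
# only if *both* behavioral sources agree on the identification.

def multiple_source_flag(pattern_flag: np.ndarray, timing_flag: np.ndarray) -> np.ndarray:
    """Combine two standalone C/IER indicators by requiring agreement.

    pattern_flag : boolean array, True where a response-pattern-based screen
                   (e.g., straight-lining or multivariate-outlier checks) flags C/IER.
    timing_flag  : boolean array, True where a screen-time-based check
                   (e.g., a rapid-responding component) flags C/IER.
    """
    return pattern_flag & timing_flag  # agreement across sources

# Toy usage with made-up data for three respondents
pattern_flag = np.array([True, True, False])
timing_flag = np.array([True, False, False])
print(multiple_source_flag(pattern_flag, timing_flag))  # [ True False False]
```

Requiring agreement in this way trades some sensitivity for robustness: a respondent flagged by only one source (as in the second toy case) is not classified as C/IER, which hedges against each source's standalone misidentifications.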
Funder
Research Council of Norway
Publisher
Springer Science and Business Media LLC