Abstract
Reaction time (RT) data are often pre-processed before analysis by rejecting outliers and errors and aggregating the data. In stimulus–response compatibility paradigms such as the approach–avoidance task (AAT), researchers often decide how to pre-process the data without an empirical basis, leading to the use of methods that may harm data quality. To provide this empirical basis, we investigated how different pre-processing methods affect the reliability and validity of the AAT. Our literature review revealed 108 unique pre-processing pipelines among 163 examined studies. Using empirical datasets, we found that validity and reliability were negatively affected by retaining error trials, by replacing error RTs with the mean RT plus a penalty, and by retaining outliers. In the relevant-feature AAT, bias scores were more reliable and valid if computed with D-scores; medians were less reliable and more unpredictable, while means were also less valid. Simulations revealed bias scores were likely to be less accurate if computed by contrasting a single aggregate of all compatible conditions with that of all incompatible conditions, rather than by contrasting separate averages per condition. We also found that multilevel model random effects were less reliable, valid, and stable, arguing against their use as bias scores. We call upon the field to drop these suboptimal practices to improve the psychometric properties of the AAT. We also call for similar investigations in related RT-based bias measures such as the implicit association task, as their commonly accepted pre-processing practices involve many of the aforementioned discouraged methods.
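For concreteness, one hedged way to write the per-condition (double-difference) D-score favoured by the abstract and the highlights below is given here; the stimulus labels (target/control) and response labels (approach/avoid) are illustrative assumptions rather than the paper's exact notation, and the denominator is the standard deviation of the participant's retained RTs:

D = \frac{\left(\overline{RT}_{\mathrm{avoid,\,target}} - \overline{RT}_{\mathrm{approach,\,target}}\right) - \left(\overline{RT}_{\mathrm{avoid,\,control}} - \overline{RT}_{\mathrm{approach,\,control}}\right)}{\mathrm{SD}\left(RT_{\mathrm{retained}}\right)}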
Highlights
• Rejecting RTs deviating more than 2 or 3 SD from the mean gives more reliable and valid results than other outlier rejection methods in empirical data
• Removing error trials gives more reliable and valid results than retaining them or replacing them with the block mean and an added penalty
• Double-difference scores are more reliable than compatibility scores under most circumstances
• More reliable and valid results are obtained in both simulated and real data by using double-difference D-scores, computed by dividing a participant’s double mean difference score by the SD of their RTs (see the sketch below)
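A minimal sketch of how these recommendations could be combined for a single participant's data. The column names (rt, error, movement, stimulus), the 2.5 SD outlier cut-off (the highlights favour 2–3 SD), and the target/control and approach/avoid cell labels are assumptions for illustration, not taken from the paper's analysis code:

```python
import pandas as pd

def aat_double_difference_dscore(trials: pd.DataFrame) -> float:
    """Compute a double-difference D-score for one participant.

    Assumed columns (illustrative, not from the original paper):
      rt        - reaction time in ms
      error     - True for error trials
      movement  - "approach" or "avoid"
      stimulus  - "target" or "control"
    """
    # 1. Remove error trials (retaining them, or replacing them with the
    #    block mean plus a penalty, reduced reliability and validity).
    clean = trials.loc[~trials["error"]].copy()

    # 2. Reject outliers deviating more than 2.5 SD from the participant's
    #    mean RT (a value within the 2-3 SD range favoured in the highlights).
    m, s = clean["rt"].mean(), clean["rt"].std()
    clean = clean[(clean["rt"] - m).abs() <= 2.5 * s]

    # 3. Mean RT per movement x stimulus cell, then the double difference:
    #    (avoid - approach) for target minus (avoid - approach) for control.
    cell_means = clean.groupby(["stimulus", "movement"])["rt"].mean()
    double_diff = (
        (cell_means[("target", "avoid")] - cell_means[("target", "approach")])
        - (cell_means[("control", "avoid")] - cell_means[("control", "approach")])
    )

    # 4. D-score: divide by the SD of the participant's retained RTs.
    return double_diff / clean["rt"].std()
```

Applied per participant (e.g. with a groupby over a participant identifier), this yields one bias score per person that can then enter reliability and validity analyses.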
Funder
Paris Lodron University of Salzburg
Publisher
Springer Science and Business Media LLC
Subject
General Psychology, Psychology (miscellaneous), Arts and Humanities (miscellaneous), Developmental and Educational Psychology, Experimental and Cognitive Psychology
Cited by
5 articles.