Abstract
The surge of interest in individual differences has coincided with the latest replication crisis, centered on brain-wide association studies of brain-behavior correlations. Yet the reliability of the measures we use in cognitive neuroscience, a crucial component of this brain-behavior relationship, is often assumed but rarely tested directly. Here, we evaluate the reliability of different cognitive tasks in a large dataset of over 250 participants, each of whom completed a multi-day task battery. We show how reliability improves as a function of the number of trials, and describe the convergence of the reliability curves across tasks, allowing us to score tasks according to their suitability for studies of individual differences. To make these findings accessible, we designed a simple web-based tool that implements this function to calculate the convergence factor and predict the expected reliability for any given number of trials and participants, even from limited pilot data.
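The kind of extrapolation the abstract describes, predicting how reliability grows with trial count, is commonly done with the Spearman-Brown prophecy formula. The sketch below is an illustration of that standard formula, not the authors' specific convergence-factor model; the function name and parameters are placeholders.

```python
def predicted_reliability(r_pilot: float, n_pilot: int, n_target: int) -> float:
    """Spearman-Brown prophecy: extrapolate reliability from a pilot
    measurement with n_pilot trials to a planned n_target trials.

    This is the textbook formula, shown for illustration only; the paper's
    web tool fits its own convergence factor from empirical curves.
    """
    k = n_target / n_pilot  # factor by which the "test" is lengthened
    return (k * r_pilot) / (1 + (k - 1) * r_pilot)

# Example: a pilot reliability of 0.5 with 50 trials, doubled to 100 trials
r = predicted_reliability(0.5, 50, 100)
print(round(r, 3))  # 0.667
```

Doubling the number of trials under this formula raises a reliability of 0.5 to about 0.67, which is why trial count matters so much for individual-differences designs.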
Publisher
Cold Spring Harbor Laboratory