How Many Participants? How Many Trials? Maximizing the Power of Reaction Time Studies
Published: 2023-08-03
ISSN: 1554-3528
Container-title: Behavior Research Methods
Language: en
Short-container-title: Behav Res
Abstract
Due to limitations in the resources available for carrying out reaction time (RT) experiments, researchers often have to choose between testing relatively few participants with relatively many trials each or testing relatively many participants with relatively few trials each. To compare the experimental power that would be obtained under each of these options, I simulated virtual experiments using subsets of participants and trials from eight large real RT datasets examining 19 experimental effects. The simulations compared designs using the first $N_T$ trials from $N_P$ randomly selected participants, holding constant the total number of trials across all participants, $N_P \times N_T$. The $[N_P, N_T]$ combination maximizing the power to detect each effect depended on how the mean and variability of that effect changed with practice. For most effects, power was greater in designs having many participants with few trials each rather than the reverse, suggesting that researchers should usually try to recruit large numbers of participants for short experimental sessions. In some cases, power for a fixed total number of trials across all participants was maximized by having as few as two trials per participant in each condition. Where researchers can make plausible predictions about how their effects will change over the course of a session, they can use those predictions to increase their experimental power.
Funder
University of Otago
Publisher
Springer Science and Business Media LLC
Subject
General Psychology, Psychology (miscellaneous), Arts and Humanities (miscellaneous), Developmental and Educational Psychology, Experimental and Cognitive Psychology
Cited by
4 articles.