Abstract
In van Doorn et al. (2021), we outlined a series of open questions concerning Bayes factors for mixed effects model comparison, with an emphasis on the impact of aggregation, the effect of measurement error, the choice of prior distributions, and the detection of interactions. Seven expert commentaries (partially) addressed these initial questions. Perhaps surprisingly, the experts disagreed (often strongly) on what constitutes best practice—a testament to the intricacy of conducting a mixed effects model comparison. Here, we provide our perspective on these comments and highlight topics that warrant further discussion. In general, we agree with many of the commentaries that in order to take full advantage of Bayesian mixed model comparison, it is important to be aware of the specific assumptions that underlie the to-be-compared models.
Funder
NWO
European Research Council
Publisher
Springer Science and Business Media LLC
Subject
Developmental and Educational Psychology; Neuropsychology and Physiological Psychology
References
48 articles.
1. Anscombe, F.J. (1973). Graphs in statistical analysis. The American Statistician, 27, 17–21.
2. Aust, F., van Doorn, J., & Haaf, J.M. (2022). Translating default priors from linear mixed models to repeated-measures ANOVA and paired t-tests. Manuscript in preparation.
3. Barr, D.J., Levy, R., Scheepers, C., & Tily, H.J. (2013). Random effects structure for confirmatory hypothesis testing: Keep it maximal. Journal of Memory and Language, 68, 255–278.
4. Browne, M. (2000). Cross-validation methods. Journal of Mathematical Psychology, 44, 108–132.
5. Cohen, J. (1988). Statistical power analysis for the behavioral sciences. Routledge.
Cited by
3 articles.