Authors:
Alex Cooper, Aki Vehtari, Catherine Forbes, Dan Simpson, Lauren Kennedy
Abstract
Brute force cross-validation (CV) is a method for predictive assessment and model selection that is general and applicable to a wide range of Bayesian models. Naive or 'brute force' CV approaches are often too computationally costly for interactive modeling workflows, especially when inference relies on Markov chain Monte Carlo (MCMC). We propose overcoming this limitation using massively parallel MCMC. Using accelerator hardware such as graphics processing units (GPUs), our approach can be about as fast (in wall clock time) as a single full-data model fit. Parallel CV is flexible because it can easily exploit a wide range of data partitioning schemes, such as those designed for non-exchangeable data. It can also accommodate a range of scoring rules. We propose MCMC diagnostics, including a summary of MCMC mixing based on the popular potential scale reduction factor ($\widehat{\mathrm{R}}$) and MCMC effective sample size ($\widehat{\mathrm{ESS}}$) measures. We also describe a method for determining whether an $\widehat{\mathrm{R}}$ diagnostic indicates approximate stationarity of the chains, which may be of more general interest for applications beyond parallel CV. Finally, we show that parallel CV and its diagnostics can be implemented with online algorithms, allowing parallel CV to scale up to very large blocking designs on memory-constrained computing accelerators.
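As a concrete illustration of the brute-force parallel CV idea, the sketch below runs one small MCMC chain per leave-one-out fold with JAX, so that all folds are fitted together on accelerator hardware via `jax.vmap`. The toy normal-mean model, the random-walk Metropolis kernel, and every name in the snippet (`log_joint`, `rw_metropolis`, `loo_fold`, the step size and chain length) are illustrative assumptions, not the authors' implementation; the log predictive density score is just one example of the scoring rules the abstract mentions.

```python
import jax
import jax.numpy as jnp
from jax.scipy.special import logsumexp


def log_joint(theta, y, w):
    # Toy model: y_i ~ Normal(theta, 1), theta ~ Normal(0, 10).
    # w is a 0/1 weight vector; the held-out observation gets weight 0,
    # which is how each CV fold drops its test point.
    log_prior = -0.5 * (theta / 10.0) ** 2
    log_lik = -0.5 * jnp.sum(w * (y - theta) ** 2)
    return log_prior + log_lik


def rw_metropolis(key, y, w, n_steps=2000, step=0.5):
    # A deliberately simple random-walk Metropolis chain targeting
    # the fold posterior log_joint(., y, w).
    def one_step(state, step_key):
        theta, lp = state
        k1, k2 = jax.random.split(step_key)
        prop = theta + step * jax.random.normal(k1)
        lp_prop = log_joint(prop, y, w)
        accept = jnp.log(jax.random.uniform(k2)) < lp_prop - lp
        theta = jnp.where(accept, prop, theta)
        lp = jnp.where(accept, lp_prop, lp)
        return (theta, lp), theta

    theta0 = jnp.zeros(())
    init = (theta0, log_joint(theta0, y, w))
    _, draws = jax.lax.scan(one_step, init, jax.random.split(key, n_steps))
    return draws  # posterior draws of theta for this fold


def loo_fold(key, y, w, i):
    # Fit fold i (observation i held out) and score it with the log
    # predictive density of the held-out point, averaged over the draws.
    draws = rw_metropolis(key, y, w)
    lpd = -0.5 * (y[i] - draws) ** 2 - 0.5 * jnp.log(2.0 * jnp.pi)
    return logsumexp(lpd) - jnp.log(draws.shape[0])


y = jnp.array([0.3, -1.2, 0.8, 1.5, -0.4])
n = y.shape[0]
keys = jax.random.split(jax.random.PRNGKey(0), n)
masks = 1.0 - jnp.eye(n)  # row i zeroes out observation i

# One independent MCMC run per fold; vmap lets the accelerator execute
# all folds together, which is the "massively parallel" part.
elpd_i = jax.vmap(loo_fold, in_axes=(0, None, 0, 0))(keys, y, masks, jnp.arange(n))
print("elpd_loo =", float(jnp.sum(elpd_i)))
```

In a realistic workflow the toy kernel would be replaced by the model's own sampler, several chains would be run per fold, and the per-fold $\widehat{\mathrm{R}}$ and $\widehat{\mathrm{ESS}}$ diagnostics described in the abstract would be monitored. The mask-based fold construction also makes other partitioning schemes (K-fold designs, blocks of non-exchangeable data) a one-line change to how the weight matrix is built.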
Publisher
Springer Science and Business Media LLC