1. Gabry, J., Simpson, D., Vehtari, A., Betancourt, M., & Gelman, A. (2017). Visualization in Bayesian Workflow. arXiv:1709.01449 [stat]. Abstract: Bayesian data analysis is about more than just computing a posterior distribution, and Bayesian visualization is about more than trace plots of Markov chains. Practical Bayesian data analysis, like all data analysis, is an iterative process of model building, inference, model checking and evaluation, and model expansion. Visualization is helpful in each of these stages of the Bayesian workflow, and it is indispensable when drawing inferences from the types of modern, high-dimensional models used by applied researchers.
2. Vehtari, A., Gelman, A., & Gabry, J. (2015). Practical Bayesian Model Evaluation Using Leave-One-out Cross-Validation and WAIC. arXiv:1507.04544 [stat]. Abstract: Leave-one-out cross-validation (LOO) and the widely applicable information criterion (WAIC) are methods for estimating pointwise out-of-sample prediction accuracy from a fitted Bayesian model, using the log-likelihood evaluated at posterior simulations of the parameter values. LOO and WAIC have various advantages over simpler estimates of predictive error such as AIC and DIC, but are less used in practice because they involve additional computational steps. Here we lay out fast and stable computations for LOO and WAIC that can be performed using existing simulation draws. We introduce an efficient computation of LOO using Pareto-smoothed importance sampling (PSIS), a new procedure for regularizing importance weights. Although WAIC is asymptotically equal to LOO, we demonstrate that PSIS-LOO is more robust in the finite case with weak priors or influential observations. As a byproduct of our calculations, we also obtain approximate standard errors for estimated predictive errors and for comparing predictive errors between two models. We implement the computations in an R package called 'loo' and demonstrate using models fit with the Bayesian inference package Stan.
3. Watanabe, S. (2013). A Widely Applicable Bayesian Information Criterion. Journal of Machine Learning Research, 14, 867-897. arXiv:1208.6338. Abstract: A statistical model or a learning machine is called regular if the map taking a parameter to a probability distribution is one-to-one and its Fisher information matrix is always positive definite; otherwise, it is called singular. In regular statistical models, the Bayes free energy, defined as the negative logarithm of the Bayes marginal likelihood, can be asymptotically approximated by the Schwarz Bayesian information criterion (BIC), whereas in singular models such an approximation does not hold. Recently, it was proved that the Bayes free energy of a singular model is asymptotically given by a generalized formula using a birational invariant, the real log canonical threshold (RLCT), instead of half the number of parameters in BIC. Theoretical values of RLCTs in several statistical models are now being discovered based on algebraic-geometrical methodology. However, it has been difficult to estimate the Bayes free energy using only training samples, because an RLCT depends on the unknown true distribution. In the present paper, we define a widely applicable Bayesian information criterion (WBIC) as the average log-likelihood function over the posterior distribution with inverse temperature 1/log n, where n is the number of training samples. We mathematically prove that WBIC has the same asymptotic expansion as the Bayes free energy, even if the statistical model is singular for, or unrealizable by, the true distribution. Since WBIC can be numerically calculated without any information about the true distribution, it is a generalized version of BIC for singular statistical models.
4. xarray: N-D labeled Arrays and Datasets in Python.
5. NetCDF: an interface for scientific data access.
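The WAIC computation described in entry 2 can be carried out directly from a matrix of pointwise log-likelihood draws. Below is a minimal numpy sketch, assuming a draws-by-observations layout; the function name is mine. The paper's own implementation is the R package loo, and the PSIS-LOO variant it recommends involves additional Pareto smoothing of importance weights that is not shown here.

```python
import numpy as np

def waic(log_lik):
    """WAIC from a (draws, observations) matrix of pointwise log-likelihoods.

    Returns (elpd_waic, p_waic) on the expected log predictive density scale.
    """
    # lppd_i = log( mean_s exp(log_lik[s, i]) ), via a stable log-sum-exp
    m = log_lik.max(axis=0)
    lppd = m + np.log(np.exp(log_lik - m).mean(axis=0))
    # p_waic_i: posterior variance of the pointwise log-likelihood
    # (the effective-number-of-parameters correction)
    p_waic = log_lik.var(axis=0, ddof=1)
    elpd_waic = np.sum(lppd - p_waic)
    return elpd_waic, np.sum(p_waic)
```

In practice `log_lik` would hold the log-likelihood of each observation evaluated at each posterior draw, as produced by an MCMC sampler such as Stan.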
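Entry 3 defines WBIC as the posterior average of the negative log-likelihood at inverse temperature 1/log n. For a regular model this can be checked against the exact Bayes free energy. The following sketch uses a toy conjugate setup (normal data with known unit variance, standard normal prior on the mean) and a grid approximation of the tempered posterior; the model, prior, and grid are illustrative assumptions, not from the paper.

```python
import numpy as np

# Toy data: n draws from N(0.5, 1); the model is N(mu, 1) with mu ~ N(0, 1).
rng = np.random.default_rng(0)
x = rng.normal(0.5, 1.0, size=200)
n = len(x)
beta = 1.0 / np.log(n)  # the inverse temperature 1/log n from the definition

# Parameter grid and total log-likelihood / log-prior at each grid value of mu
mu = np.linspace(-3.0, 3.0, 2001)
loglik = -0.5 * ((x[:, None] - mu[None, :]) ** 2).sum(axis=0) \
         - 0.5 * n * np.log(2 * np.pi)
logprior = -0.5 * mu**2 - 0.5 * np.log(2 * np.pi)

# Tempered posterior weights proportional to prior * likelihood**beta,
# normalized with a log-sum-exp shift for numerical stability
logw = logprior + beta * loglik
w = np.exp(logw - logw.max())
w /= w.sum()

# WBIC: tempered-posterior average of the negative total log-likelihood
wbic = -(w * loglik).sum()
```

For this conjugate model the marginal likelihood is available in closed form, so `wbic` can be compared directly with the exact free energy -log p(x); in singular models, where no such closed form exists, this is exactly the quantity WBIC is designed to approximate.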